linux-kernel.vger.kernel.org archive mirror
* [PATCH v12 00/12] livepatch: Atomic replace feature
@ 2018-08-28 14:35 Petr Mladek
  2018-08-28 14:35 ` [PATCH v12 01/12] livepatch: Change void *new_func -> unsigned long new_addr in struct klp_func Petr Mladek
                   ` (12 more replies)
  0 siblings, 13 replies; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

livepatch: Atomic replace feature

The atomic replace feature allows creating cumulative patches. They
are useful when you maintain many livepatches and want to remove
one that is lower on the stack. They are also very useful when
several patches touch the same function and there are dependencies
between them.

This version does another big refactoring based on feedback against
v11[*]. In particular, it removes the registration step and changes
the API and the handling of livepatch dependencies. The aim is
to keep the number of possible variants at a sane level.
It helps keep the feature "easy" to use and maintain.

[*] https://lkml.kernel.org/r/20180323120028.31451-1-pmladek@suse.com


Changes against v11:

  + Functional changes:

    + Livepatches get automatically unregistered when disabled.
      Note that the sysfs interface disappears at this point.
      It simplifies the API and code. The only drawback is that
      the patch can be enabled again only by reloading the module.

    + Refuse to load conflicting patches. The same function can
      be patched again only by a new cumulative patch that
      replaces all older ones.

    + Non-conflicting patches can be loaded and disabled in any
      order.
      

  + API related changes:

     + Change void *new_func -> unsigned long new_addr in
       struct klp_func.

     + Several new macros to hide implementation details and
       avoid casting when defining struct klp_func and klp_object.

     + Remove the obsolete klp_register_patch()/klp_unregister_patch() API.


  + Changes in the selftests against v4:

     + Use new macros to define struct klp_func and klp_object.

     + Remove klp_register_patch()/klp_unregister_patch() calls.

     + Replace load_mod() + wait_for_transition() with three
       variants: load_mod(), load_lp(), load_lp_nowait(). IMHO,
       they are easier to use because we now need to detect the
       end of the transition differently after disable_lp().

     + Replace unload_mod() with two variants, unload_mod() and
       unload_lp(), to match the above change.

     + Wait for the end of transition in disable_lp()
       instead of the unreliable check of the sysfs interface.

     Note that I did not touch the logs with the expected results.
     They stay exactly the same as in v4 posted by Joe.
     I hope that it is a good sign ;-)


Changes against v10:

  + Bug fixes and functional changes:
    + Handle Nops in klp_ftrace_handler() to avoid an infinite loop [Mirek]
    + Really add dynamically allocated klp_object into the list [Petr]
    + Clear patch->replace when transition finishes [Josh]

  + Refactoring and clean up [Josh]:
    + Replace enum types with bools
    + Avoid using ERR_PTR
    + Remove too paranoid warnings
    + Distinguish registered patches by a flag instead of a list
    + Squash some functions
    + Update comments, documentation, and commit messages
    + Squashed and split patches to do more controversial changes later

Changes against v9:

  + Fixed check of valid NOPs for already loaded objects,
    regression introduced in v9 [Joe, Mirek]
  + Allow replacing even disabled patches [Evgenii]

Changes against v8:

  + Fixed handling of statically defined struct klp_object
    with empty array of functions [Joe, Mirek]
  + Removed redundant func->new_func assignment for NOPs [Mirek]
  + Improved some wording [Mirek]

Changes against v7:

  + Fixed handling of NOPs for not-yet-loaded modules
  + Made klp_replaced_patches list static [Mirek]
  + Made klp_free_object() public later [Mirek]
  + Fixed several reported typos [Mirek, Joe]
  + Updated documentation according to the feedback [Joe]
  + Added some Acks [Mirek]

Changes against v6:

  + used list_move when disabling replaced patches [Jason]
  + renamed KLP_FUNC_ORIGINAL -> KLP_FUNC_STATIC [Mirek]
  + used klp_is_func_type() in klp_unpatch_object() [Mirek]
  + moved static definition of klp_get_or_add_object() [Mirek]
  + updated comment about synchronization in forced mode [Mirek]
  + added user documentation
  + fixed several typos


Jason Baron (2):
  livepatch: Use lists to manage patches, objects and functions
  livepatch: Add atomic replace

Joe Lawrence (1):
  selftests/livepatch: introduce tests

Petr Mladek (9):
  livepatch: Change void *new_func -> unsigned long new_addr in struct
    klp_func
  livepatch: Helper macros to define livepatch structures
  livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code
  livepatch: Consolidate klp_free functions
  livepatch: Refuse to unload only livepatches available during a forced
    transition
  livepatch: Simplify API by removing registration step
  livepatch: Remove Nop structures when unused
  livepatch: Atomic replace and cumulative patches documentation
  livepatch: Remove ordering and refuse loading conflicting patches

 Documentation/livepatch/callbacks.txt              | 489 +-----------
 Documentation/livepatch/cumulative-patches.txt     | 105 +++
 Documentation/livepatch/livepatch.txt              | 131 ++--
 MAINTAINERS                                        |   1 +
 include/linux/livepatch.h                          |  84 ++-
 kernel/livepatch/core.c                            | 833 ++++++++++++++-------
 kernel/livepatch/core.h                            |   4 +
 kernel/livepatch/patch.c                           |  41 +-
 kernel/livepatch/patch.h                           |   1 +
 kernel/livepatch/transition.c                      |  26 +-
 lib/Kconfig.debug                                  |  21 +
 lib/Makefile                                       |   2 +
 lib/livepatch/Makefile                             |  15 +
 lib/livepatch/test_klp_atomic_replace.c            |  53 ++
 lib/livepatch/test_klp_callbacks_busy.c            |  43 ++
 lib/livepatch/test_klp_callbacks_demo.c            | 109 +++
 lib/livepatch/test_klp_callbacks_demo2.c           |  89 +++
 lib/livepatch/test_klp_callbacks_mod.c             |  24 +
 lib/livepatch/test_klp_livepatch.c                 |  47 ++
 lib/livepatch/test_klp_shadow_vars.c               | 236 ++++++
 samples/livepatch/livepatch-callbacks-demo.c       |  68 +-
 samples/livepatch/livepatch-sample.c               |  26 +-
 samples/livepatch/livepatch-shadow-fix1.c          |  34 +-
 samples/livepatch/livepatch-shadow-fix2.c          |  34 +-
 tools/testing/selftests/Makefile                   |   1 +
 tools/testing/selftests/livepatch/Makefile         |   8 +
 tools/testing/selftests/livepatch/README           |  43 ++
 tools/testing/selftests/livepatch/config           |   1 +
 tools/testing/selftests/livepatch/functions.sh     | 203 +++++
 .../testing/selftests/livepatch/test-callbacks.sh  | 587 +++++++++++++++
 .../testing/selftests/livepatch/test-livepatch.sh  | 168 +++++
 .../selftests/livepatch/test-shadow-vars.sh        |  60 ++
 32 files changed, 2603 insertions(+), 984 deletions(-)
 create mode 100644 Documentation/livepatch/cumulative-patches.txt
 create mode 100644 lib/livepatch/Makefile
 create mode 100644 lib/livepatch/test_klp_atomic_replace.c
 create mode 100644 lib/livepatch/test_klp_callbacks_busy.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo2.c
 create mode 100644 lib/livepatch/test_klp_callbacks_mod.c
 create mode 100644 lib/livepatch/test_klp_livepatch.c
 create mode 100644 lib/livepatch/test_klp_shadow_vars.c
 create mode 100644 tools/testing/selftests/livepatch/Makefile
 create mode 100644 tools/testing/selftests/livepatch/README
 create mode 100644 tools/testing/selftests/livepatch/config
 create mode 100644 tools/testing/selftests/livepatch/functions.sh
 create mode 100755 tools/testing/selftests/livepatch/test-callbacks.sh
 create mode 100755 tools/testing/selftests/livepatch/test-livepatch.sh
 create mode 100755 tools/testing/selftests/livepatch/test-shadow-vars.sh

-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 01/12] livepatch: Change void *new_func -> unsigned long new_addr in struct klp_func
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-08-31  8:37   ` Miroslav Benes
  2018-08-28 14:35 ` [PATCH v12 02/12] livepatch: Helper macros to define livepatch structures Petr Mladek
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

The addresses of the function to be patched and of the new
function are stored in struct klp_func as:

	void *new_func;
	unsigned long old_addr;

The different naming scheme and types derive from the way
the addresses are set. @old_addr is assigned at runtime using
a kallsyms-based search. @new_func is statically initialized,
for example:

  static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
  };

This patch changes void *new_func -> unsigned long new_addr. It removes
some confusion when these addresses are later used in the code. It is
motivated by a followup patch that adds special NOP struct klp_func
entries where we want to assign func->new_addr = func->old_addr.

This patch does not modify the existing behavior.

IMPORTANT: This patch modifies the ABI. Livepatch modules will need
to use, for example:

  static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_addr = (unsigned long)livepatch_cmdline_proc_show,
	}, { }
  };

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 include/linux/livepatch.h                    | 6 +++---
 kernel/livepatch/core.c                      | 4 ++--
 kernel/livepatch/patch.c                     | 2 +-
 kernel/livepatch/transition.c                | 4 ++--
 samples/livepatch/livepatch-callbacks-demo.c | 2 +-
 samples/livepatch/livepatch-sample.c         | 2 +-
 samples/livepatch/livepatch-shadow-fix1.c    | 4 ++--
 samples/livepatch/livepatch-shadow-fix2.c    | 4 ++--
 8 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index aec44b1d9582..817a737b49e8 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -37,7 +37,7 @@
 /**
  * struct klp_func - function structure for live patching
  * @old_name:	name of the function to be patched
- * @new_func:	pointer to the patched function code
+ * @new_addr:	address of the new function (function pointer)
  * @old_sympos: a hint indicating which symbol position the old function
  *		can be found (optional)
  * @old_addr:	the address of the function being patched
@@ -66,7 +66,7 @@
 struct klp_func {
 	/* external */
 	const char *old_name;
-	void *new_func;
+	unsigned long new_addr;
 	/*
 	 * The old_sympos field is optional and can be used to resolve
 	 * duplicate symbol names in livepatch objects. If this field is zero,
@@ -157,7 +157,7 @@ struct klp_patch {
 
 #define klp_for_each_func(obj, func) \
 	for (func = obj->funcs; \
-	     func->old_name || func->new_func || func->old_sympos; \
+	     func->old_name || func->new_addr || func->old_sympos; \
 	     func++)
 
 int klp_register_patch(struct klp_patch *);
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 5b77a7314e01..577ebeb43024 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -675,7 +675,7 @@ static void klp_free_patch(struct klp_patch *patch)
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 {
-	if (!func->old_name || !func->new_func)
+	if (!func->old_name || !func->new_addr)
 		return -EINVAL;
 
 	if (strlen(func->old_name) >= KSYM_NAME_LEN)
@@ -733,7 +733,7 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 			return -ENOENT;
 		}
 
-		ret = kallsyms_lookup_size_offset((unsigned long)func->new_func,
+		ret = kallsyms_lookup_size_offset(func->new_addr,
 						  &func->new_size, NULL);
 		if (!ret) {
 			pr_err("kallsyms size lookup failed for '%s' replacement\n",
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 82d584225dc6..82927f59d3ff 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -118,7 +118,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 		}
 	}
 
-	klp_arch_set_pc(regs, (unsigned long)func->new_func);
+	klp_arch_set_pc(regs, func->new_addr);
 unlock:
 	preempt_enable_notrace();
 }
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 5bc349805e03..982a2e4c6120 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -217,7 +217,7 @@ static int klp_check_stack_func(struct klp_func *func,
 			  * Check for the to-be-unpatched function
 			  * (the func itself).
 			  */
-			func_addr = (unsigned long)func->new_func;
+			func_addr = func->new_addr;
 			func_size = func->new_size;
 		} else {
 			/*
@@ -235,7 +235,7 @@ static int klp_check_stack_func(struct klp_func *func,
 				struct klp_func *prev;
 
 				prev = list_next_entry(func, stack_node);
-				func_addr = (unsigned long)prev->new_func;
+				func_addr = prev->new_addr;
 				func_size = prev->new_size;
 			}
 		}
diff --git a/samples/livepatch/livepatch-callbacks-demo.c b/samples/livepatch/livepatch-callbacks-demo.c
index 72f9e6d1387b..4b1aec474bb7 100644
--- a/samples/livepatch/livepatch-callbacks-demo.c
+++ b/samples/livepatch/livepatch-callbacks-demo.c
@@ -153,7 +153,7 @@ static struct klp_func no_funcs[] = {
 static struct klp_func busymod_funcs[] = {
 	{
 		.old_name = "busymod_work_func",
-		.new_func = patched_work_func,
+		.new_addr = (unsigned long)patched_work_func,
 	}, { }
 };
 
diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
index 2d554dd930e2..e470a052fb77 100644
--- a/samples/livepatch/livepatch-sample.c
+++ b/samples/livepatch/livepatch-sample.c
@@ -51,7 +51,7 @@ static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
 static struct klp_func funcs[] = {
 	{
 		.old_name = "cmdline_proc_show",
-		.new_func = livepatch_cmdline_proc_show,
+		.new_addr = (unsigned long)livepatch_cmdline_proc_show,
 	}, { }
 };
 
diff --git a/samples/livepatch/livepatch-shadow-fix1.c b/samples/livepatch/livepatch-shadow-fix1.c
index 49b13553eaae..ede0de7abe40 100644
--- a/samples/livepatch/livepatch-shadow-fix1.c
+++ b/samples/livepatch/livepatch-shadow-fix1.c
@@ -130,11 +130,11 @@ void livepatch_fix1_dummy_free(struct dummy *d)
 static struct klp_func funcs[] = {
 	{
 		.old_name = "dummy_alloc",
-		.new_func = livepatch_fix1_dummy_alloc,
+		.new_addr = (unsigned long)livepatch_fix1_dummy_alloc,
 	},
 	{
 		.old_name = "dummy_free",
-		.new_func = livepatch_fix1_dummy_free,
+		.new_addr = (unsigned long)livepatch_fix1_dummy_free,
 	}, { }
 };
 
diff --git a/samples/livepatch/livepatch-shadow-fix2.c b/samples/livepatch/livepatch-shadow-fix2.c
index b34c7bf83356..035ee0ef387f 100644
--- a/samples/livepatch/livepatch-shadow-fix2.c
+++ b/samples/livepatch/livepatch-shadow-fix2.c
@@ -107,11 +107,11 @@ void livepatch_fix2_dummy_free(struct dummy *d)
 static struct klp_func funcs[] = {
 	{
 		.old_name = "dummy_check",
-		.new_func = livepatch_fix2_dummy_check,
+		.new_addr = (unsigned long)livepatch_fix2_dummy_check,
 	},
 	{
 		.old_name = "dummy_free",
-		.new_func = livepatch_fix2_dummy_free,
+		.new_addr = (unsigned long)livepatch_fix2_dummy_free,
 	}, { }
 };
 
-- 
2.13.7



* [PATCH v12 02/12] livepatch: Helper macros to define livepatch structures
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
  2018-08-28 14:35 ` [PATCH v12 01/12] livepatch: Change void *new_func -> unsigned long new_addr in struct klp_func Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-08-28 14:35 ` [PATCH v12 03/12] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code Petr Mladek
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

The definition of struct klp_func might be a bit confusing.
The original function is defined by its name as a string.
The new function is defined by its name as a function pointer
cast to unsigned long.

This patch adds helper macros that hide the different types.
The functions are then defined just by name. For example:

static struct klp_func funcs[] = {
	{
		.old_name = "function_A",
		.new_addr = (unsigned long)livepatch_function_A,
	}, {
		.old_name = "function_B",
		.new_addr = (unsigned long)livepatch_function_B,
	}, { }
};

can be defined as:

static struct klp_func funcs[] = {
	KLP_FUNC(function_A,
		 livepatch_function_A),
	KLP_FUNC(function_B,
		 livepatch_function_B),
	KLP_FUNC_END
};

Just for completeness, this patch adds similar macros to define
struct klp_object. For example,

static struct klp_object objs[] = {
	{
		/* name being NULL means vmlinux */
		.funcs = funcs_vmlinux,
	}, {
		.name = "module_A",
		.funcs = funcs_module_A,
	}, {
		.name = "module_B",
		.funcs = funcs_module_B,
	}, { }
};

can be defined as:

static struct klp_object objs[] = {
	KLP_VMLINUX(funcs_vmlinux),
	KLP_OBJECT(module_A,
		   funcs_module_A),
	KLP_OBJECT(module_B,
		   funcs_module_B),
	KLP_OBJECT_END
};

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 include/linux/livepatch.h                    | 40 ++++++++++++++++++++
 samples/livepatch/livepatch-callbacks-demo.c | 55 +++++++++++-----------------
 samples/livepatch/livepatch-sample.c         | 13 +++----
 samples/livepatch/livepatch-shadow-fix1.c    | 20 ++++------
 samples/livepatch/livepatch-shadow-fix2.c    | 20 ++++------
 5 files changed, 83 insertions(+), 65 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 817a737b49e8..1163742b27c0 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -152,6 +152,46 @@ struct klp_patch {
 	struct completion finish;
 };
 
+#define KLP_FUNC(_old_func, _new_func) {			\
+		.old_name = #_old_func,				\
+		.new_addr = (unsigned long)(_new_func),		\
+	}
+#define KLP_FUNC_POS(_old_func, _new_func, _sympos) {		\
+		.old_name = #_old_func,				\
+		.new_addr = (unsigned long)(_new_func),		\
+		.old_sympos = _sympos,				\
+	}
+#define KLP_FUNC_END { }
+
+#define KLP_OBJECT(_obj, _funcs) {				\
+		.name = #_obj,					\
+		.funcs = _funcs,				\
+	}
+#define KLP_OBJECT_CALLBACKS(_obj, _funcs,			\
+			     _pre_patch, _post_patch,		\
+			     _pre_unpatch, _post_unpatch) {	\
+		.name = #_obj,					\
+		.funcs = _funcs,				\
+		.callbacks.pre_patch = _pre_patch,		\
+		.callbacks.post_patch = _post_patch,		\
+		.callbacks.pre_unpatch = _pre_unpatch,		\
+		.callbacks.post_unpatch = _post_unpatch,	\
+	}
+/* name being NULL means vmlinux */
+#define KLP_VMLINUX(_funcs) {					\
+		.funcs = _funcs,				\
+	}
+#define KLP_VMLINUX_CALLBACKS(_funcs,				\
+			     _pre_patch, _post_patch,		\
+			     _pre_unpatch, _post_unpatch) {	\
+		.funcs = _funcs,				\
+		.callbacks.pre_patch = _pre_patch,		\
+		.callbacks.post_patch = _post_patch,		\
+		.callbacks.pre_unpatch = _pre_unpatch,		\
+		.callbacks.post_unpatch = _post_unpatch,	\
+	}
+#define KLP_OBJECT_END { }
+
 #define klp_for_each_object(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
diff --git a/samples/livepatch/livepatch-callbacks-demo.c b/samples/livepatch/livepatch-callbacks-demo.c
index 4b1aec474bb7..001a0c672251 100644
--- a/samples/livepatch/livepatch-callbacks-demo.c
+++ b/samples/livepatch/livepatch-callbacks-demo.c
@@ -147,45 +147,34 @@ static void patched_work_func(struct work_struct *work)
 }
 
 static struct klp_func no_funcs[] = {
-	{ }
+	KLP_FUNC_END
 };
 
 static struct klp_func busymod_funcs[] = {
-	{
-		.old_name = "busymod_work_func",
-		.new_addr = (unsigned long)patched_work_func,
-	}, { }
+	KLP_FUNC(busymod_work_func,
+		 patched_work_func),
+	KLP_FUNC_END
 };
 
 static struct klp_object objs[] = {
-	{
-		.name = NULL,	/* vmlinux */
-		.funcs = no_funcs,
-		.callbacks = {
-			.pre_patch = pre_patch_callback,
-			.post_patch = post_patch_callback,
-			.pre_unpatch = pre_unpatch_callback,
-			.post_unpatch = post_unpatch_callback,
-		},
-	},	{
-		.name = "livepatch_callbacks_mod",
-		.funcs = no_funcs,
-		.callbacks = {
-			.pre_patch = pre_patch_callback,
-			.post_patch = post_patch_callback,
-			.pre_unpatch = pre_unpatch_callback,
-			.post_unpatch = post_unpatch_callback,
-		},
-	},	{
-		.name = "livepatch_callbacks_busymod",
-		.funcs = busymod_funcs,
-		.callbacks = {
-			.pre_patch = pre_patch_callback,
-			.post_patch = post_patch_callback,
-			.pre_unpatch = pre_unpatch_callback,
-			.post_unpatch = post_unpatch_callback,
-		},
-	}, { }
+	KLP_VMLINUX_CALLBACKS(no_funcs,
+			      pre_patch_callback,
+			      post_patch_callback,
+			      pre_unpatch_callback,
+			      post_unpatch_callback),
+	KLP_OBJECT_CALLBACKS(livepatch_callbacks_mod,
+			     no_funcs,
+			     pre_patch_callback,
+			     post_patch_callback,
+			     pre_unpatch_callback,
+			     post_unpatch_callback),
+	KLP_OBJECT_CALLBACKS(livepatch_callbacks_busymod,
+			     busymod_funcs,
+			     pre_patch_callback,
+			     post_patch_callback,
+			     pre_unpatch_callback,
+			     post_unpatch_callback),
+	KLP_OBJECT_END
 };
 
 static struct klp_patch patch = {
diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
index e470a052fb77..de30d1ba4791 100644
--- a/samples/livepatch/livepatch-sample.c
+++ b/samples/livepatch/livepatch-sample.c
@@ -49,17 +49,14 @@ static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
 }
 
 static struct klp_func funcs[] = {
-	{
-		.old_name = "cmdline_proc_show",
-		.new_addr = (unsigned long)livepatch_cmdline_proc_show,
-	}, { }
+	KLP_FUNC(cmdline_proc_show,
+		 livepatch_cmdline_proc_show),
+	KLP_FUNC_END
 };
 
 static struct klp_object objs[] = {
-	{
-		/* name being NULL means vmlinux */
-		.funcs = funcs,
-	}, { }
+	KLP_VMLINUX(funcs),
+	KLP_OBJECT_END
 };
 
 static struct klp_patch patch = {
diff --git a/samples/livepatch/livepatch-shadow-fix1.c b/samples/livepatch/livepatch-shadow-fix1.c
index ede0de7abe40..8f337b4a9108 100644
--- a/samples/livepatch/livepatch-shadow-fix1.c
+++ b/samples/livepatch/livepatch-shadow-fix1.c
@@ -128,21 +128,17 @@ void livepatch_fix1_dummy_free(struct dummy *d)
 }
 
 static struct klp_func funcs[] = {
-	{
-		.old_name = "dummy_alloc",
-		.new_addr = (unsigned long)livepatch_fix1_dummy_alloc,
-	},
-	{
-		.old_name = "dummy_free",
-		.new_addr = (unsigned long)livepatch_fix1_dummy_free,
-	}, { }
+	KLP_FUNC(dummy_alloc,
+		 livepatch_fix1_dummy_alloc),
+	KLP_FUNC(dummy_free,
+		 livepatch_fix1_dummy_free),
+	KLP_FUNC_END
 };
 
 static struct klp_object objs[] = {
-	{
-		.name = "livepatch_shadow_mod",
-		.funcs = funcs,
-	}, { }
+	KLP_OBJECT(livepatch_shadow_mod,
+		   funcs),
+	KLP_OBJECT_END
 };
 
 static struct klp_patch patch = {
diff --git a/samples/livepatch/livepatch-shadow-fix2.c b/samples/livepatch/livepatch-shadow-fix2.c
index 035ee0ef387f..e8c0c0467bc0 100644
--- a/samples/livepatch/livepatch-shadow-fix2.c
+++ b/samples/livepatch/livepatch-shadow-fix2.c
@@ -105,21 +105,17 @@ void livepatch_fix2_dummy_free(struct dummy *d)
 }
 
 static struct klp_func funcs[] = {
-	{
-		.old_name = "dummy_check",
-		.new_addr = (unsigned long)livepatch_fix2_dummy_check,
-	},
-	{
-		.old_name = "dummy_free",
-		.new_addr = (unsigned long)livepatch_fix2_dummy_free,
-	}, { }
+	KLP_FUNC(dummy_check,
+		 livepatch_fix2_dummy_check),
+	KLP_FUNC(dummy_free,
+		 livepatch_fix2_dummy_free),
+	KLP_FUNC_END
 };
 
 static struct klp_object objs[] = {
-	{
-		.name = "livepatch_shadow_mod",
-		.funcs = funcs,
-	}, { }
+	KLP_OBJECT(livepatch_shadow_mod,
+		   funcs),
+	KLP_OBJECT_END
 };
 
 static struct klp_patch patch = {
-- 
2.13.7



* [PATCH v12 03/12] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
  2018-08-28 14:35 ` [PATCH v12 01/12] livepatch: Change void *new_func -> unsigned long new_addr in struct klp_func Petr Mladek
  2018-08-28 14:35 ` [PATCH v12 02/12] livepatch: Helper macros to define livepatch structures Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-08-31  8:38   ` Miroslav Benes
  2018-08-28 14:35 ` [PATCH v12 04/12] livepatch: Consolidate klp_free functions Petr Mladek
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

We are going to simplify the API and code by removing the registration
step. This requires calling the init/free functions from the
enable/disable ones.

This patch just moves the code around to avoid the need for more
forward declarations.

It does not change the code itself, except for adding two forward
declarations.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 kernel/livepatch/core.c | 330 ++++++++++++++++++++++++------------------------
 1 file changed, 166 insertions(+), 164 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 577ebeb43024..b3956cce239e 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -278,170 +278,6 @@ static int klp_write_object_relocations(struct module *pmod,
 	return ret;
 }
 
-static int __klp_disable_patch(struct klp_patch *patch)
-{
-	struct klp_object *obj;
-
-	if (WARN_ON(!patch->enabled))
-		return -EINVAL;
-
-	if (klp_transition_patch)
-		return -EBUSY;
-
-	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches) &&
-	    list_next_entry(patch, list)->enabled)
-		return -EBUSY;
-
-	klp_init_transition(patch, KLP_UNPATCHED);
-
-	klp_for_each_object(patch, obj)
-		if (obj->patched)
-			klp_pre_unpatch_callback(obj);
-
-	/*
-	 * Enforce the order of the func->transition writes in
-	 * klp_init_transition() and the TIF_PATCH_PENDING writes in
-	 * klp_start_transition().  In the rare case where klp_ftrace_handler()
-	 * is called shortly after klp_update_patch_state() switches the task,
-	 * this ensures the handler sees that func->transition is set.
-	 */
-	smp_wmb();
-
-	klp_start_transition();
-	klp_try_complete_transition();
-	patch->enabled = false;
-
-	return 0;
-}
-
-/**
- * klp_disable_patch() - disables a registered patch
- * @patch:	The registered, enabled patch to be disabled
- *
- * Unregisters the patched functions from ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_disable_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (!patch->enabled) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = __klp_disable_patch(patch);
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_disable_patch);
-
-static int __klp_enable_patch(struct klp_patch *patch)
-{
-	struct klp_object *obj;
-	int ret;
-
-	if (klp_transition_patch)
-		return -EBUSY;
-
-	if (WARN_ON(patch->enabled))
-		return -EINVAL;
-
-	/* enforce stacking: only the first disabled patch can be enabled */
-	if (patch->list.prev != &klp_patches &&
-	    !list_prev_entry(patch, list)->enabled)
-		return -EBUSY;
-
-	/*
-	 * A reference is taken on the patch module to prevent it from being
-	 * unloaded.
-	 */
-	if (!try_module_get(patch->mod))
-		return -ENODEV;
-
-	pr_notice("enabling patch '%s'\n", patch->mod->name);
-
-	klp_init_transition(patch, KLP_PATCHED);
-
-	/*
-	 * Enforce the order of the func->transition writes in
-	 * klp_init_transition() and the ops->func_stack writes in
-	 * klp_patch_object(), so that klp_ftrace_handler() will see the
-	 * func->transition updates before the handler is registered and the
-	 * new funcs become visible to the handler.
-	 */
-	smp_wmb();
-
-	klp_for_each_object(patch, obj) {
-		if (!klp_is_object_loaded(obj))
-			continue;
-
-		ret = klp_pre_patch_callback(obj);
-		if (ret) {
-			pr_warn("pre-patch callback failed for object '%s'\n",
-				klp_is_module(obj) ? obj->name : "vmlinux");
-			goto err;
-		}
-
-		ret = klp_patch_object(obj);
-		if (ret) {
-			pr_warn("failed to patch object '%s'\n",
-				klp_is_module(obj) ? obj->name : "vmlinux");
-			goto err;
-		}
-	}
-
-	klp_start_transition();
-	klp_try_complete_transition();
-	patch->enabled = true;
-
-	return 0;
-err:
-	pr_warn("failed to enable patch '%s'\n", patch->mod->name);
-
-	klp_cancel_transition();
-	return ret;
-}
-
-/**
- * klp_enable_patch() - enables a registered patch
- * @patch:	The registered, disabled patch to be enabled
- *
- * Performs the needed symbol lookups and code relocations,
- * then registers the patched functions with ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_enable_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = __klp_enable_patch(patch);
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_enable_patch);
-
 /*
  * Sysfs Interface
  *
@@ -454,6 +290,8 @@ EXPORT_SYMBOL_GPL(klp_enable_patch);
  * /sys/kernel/livepatch/<patch>/<object>
  * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
  */
+static int __klp_disable_patch(struct klp_patch *patch);
+static int __klp_enable_patch(struct klp_patch *patch);
 
 static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 			     const char *buf, size_t count)
@@ -904,6 +742,170 @@ int klp_register_patch(struct klp_patch *patch)
 }
 EXPORT_SYMBOL_GPL(klp_register_patch);
 
+static int __klp_disable_patch(struct klp_patch *patch)
+{
+	struct klp_object *obj;
+
+	if (WARN_ON(!patch->enabled))
+		return -EINVAL;
+
+	if (klp_transition_patch)
+		return -EBUSY;
+
+	/* enforce stacking: only the last enabled patch can be disabled */
+	if (!list_is_last(&patch->list, &klp_patches) &&
+	    list_next_entry(patch, list)->enabled)
+		return -EBUSY;
+
+	klp_init_transition(patch, KLP_UNPATCHED);
+
+	klp_for_each_object(patch, obj)
+		if (obj->patched)
+			klp_pre_unpatch_callback(obj);
+
+	/*
+	 * Enforce the order of the func->transition writes in
+	 * klp_init_transition() and the TIF_PATCH_PENDING writes in
+	 * klp_start_transition().  In the rare case where klp_ftrace_handler()
+	 * is called shortly after klp_update_patch_state() switches the task,
+	 * this ensures the handler sees that func->transition is set.
+	 */
+	smp_wmb();
+
+	klp_start_transition();
+	klp_try_complete_transition();
+	patch->enabled = false;
+
+	return 0;
+}
+
+/**
+ * klp_disable_patch() - disables a registered patch
+ * @patch:	The registered, enabled patch to be disabled
+ *
+ * Unregisters the patched functions from ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_disable_patch(struct klp_patch *patch)
+{
+	int ret;
+
+	mutex_lock(&klp_mutex);
+
+	if (!klp_is_patch_registered(patch)) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	if (!patch->enabled) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	ret = __klp_disable_patch(patch);
+
+err:
+	mutex_unlock(&klp_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(klp_disable_patch);
+
+static int __klp_enable_patch(struct klp_patch *patch)
+{
+	struct klp_object *obj;
+	int ret;
+
+	if (klp_transition_patch)
+		return -EBUSY;
+
+	if (WARN_ON(patch->enabled))
+		return -EINVAL;
+
+	/* enforce stacking: only the first disabled patch can be enabled */
+	if (patch->list.prev != &klp_patches &&
+	    !list_prev_entry(patch, list)->enabled)
+		return -EBUSY;
+
+	/*
+	 * A reference is taken on the patch module to prevent it from being
+	 * unloaded.
+	 */
+	if (!try_module_get(patch->mod))
+		return -ENODEV;
+
+	pr_notice("enabling patch '%s'\n", patch->mod->name);
+
+	klp_init_transition(patch, KLP_PATCHED);
+
+	/*
+	 * Enforce the order of the func->transition writes in
+	 * klp_init_transition() and the ops->func_stack writes in
+	 * klp_patch_object(), so that klp_ftrace_handler() will see the
+	 * func->transition updates before the handler is registered and the
+	 * new funcs become visible to the handler.
+	 */
+	smp_wmb();
+
+	klp_for_each_object(patch, obj) {
+		if (!klp_is_object_loaded(obj))
+			continue;
+
+		ret = klp_pre_patch_callback(obj);
+		if (ret) {
+			pr_warn("pre-patch callback failed for object '%s'\n",
+				klp_is_module(obj) ? obj->name : "vmlinux");
+			goto err;
+		}
+
+		ret = klp_patch_object(obj);
+		if (ret) {
+			pr_warn("failed to patch object '%s'\n",
+				klp_is_module(obj) ? obj->name : "vmlinux");
+			goto err;
+		}
+	}
+
+	klp_start_transition();
+	klp_try_complete_transition();
+	patch->enabled = true;
+
+	return 0;
+err:
+	pr_warn("failed to enable patch '%s'\n", patch->mod->name);
+
+	klp_cancel_transition();
+	return ret;
+}
+
+/**
+ * klp_enable_patch() - enables a registered patch
+ * @patch:	The registered, disabled patch to be enabled
+ *
+ * Performs the needed symbol lookups and code relocations,
+ * then registers the patched functions with ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_enable_patch(struct klp_patch *patch)
+{
+	int ret;
+
+	mutex_lock(&klp_mutex);
+
+	if (!klp_is_patch_registered(patch)) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	ret = __klp_enable_patch(patch);
+
+err:
+	mutex_unlock(&klp_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(klp_enable_patch);
+
 /*
  * Remove parts of patches that touch a given kernel module. The list of
  * patches processed might be limited. When limit is NULL, all patches
-- 
2.13.7
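The two stacking checks introduced above — __klp_disable_patch() refuses unless the patch is the topmost enabled one, and __klp_enable_patch() refuses unless everything below the patch is enabled — can be sketched in userspace with a toy list. This is only an illustration: `struct list_head`, `list_add_tail()`, `list_is_last()` and `struct patch` below are simplified stand-ins for the kernel's `<linux/list.h>` helpers and `struct klp_patch`, not the real API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for the kernel's circular doubly linked list. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical reduction of struct klp_patch. */
struct patch { struct list_head list; bool enabled; };

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

static bool list_is_last(const struct list_head *entry,
			 const struct list_head *head)
{
	return entry->next == head;
}

/* Mirrors the check in __klp_disable_patch(): only the topmost patch,
 * or one with nothing enabled above it, may be disabled. */
static bool can_disable(struct patch *p, struct list_head *patches)
{
	return list_is_last(&p->list, patches) ||
	       !container_of(p->list.next, struct patch, list)->enabled;
}

/* Mirrors the check in __klp_enable_patch(): only the bottom patch,
 * or one with everything below it enabled, may be enabled. */
static bool can_enable(struct patch *p, struct list_head *patches)
{
	return p->list.prev == patches ||
	       container_of(p->list.prev, struct patch, list)->enabled;
}
```

With patches p1 (bottom) and p2 (top) both enabled, only p2 may be disabled; once p2 is disabled it may be re-enabled only while p1 is still enabled.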


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 04/12] livepatch: Consolidate klp_free functions
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (2 preceding siblings ...)
  2018-08-28 14:35 ` [PATCH v12 03/12] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-08-31 10:39   ` Miroslav Benes
  2018-08-28 14:35 ` [PATCH v12 05/12] livepatch: Refuse to unload only livepatches available during a forced transition Petr Mladek
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

The code for freeing livepatch structures is a bit scattered and tricky:

  + direct calls to klp_free_*_limited() and kobject_put() are
    used to release partially initialized objects

  + klp_free_patch() removes the patch from the public list
    and releases all objects except for patch->kobj

  + kobject_put(&patch->kobj) and the related wait_for_completion()
    are called directly outside klp_mutex; this code is duplicated

Now, we are going to remove the registration stage to simplify the API
and the code. This would require handling more situations in
klp_enable_patch() error paths.

More importantly, we are going to add a feature called atomic replace.
It will need to dynamically create func and object structures. We will
want to reuse the existing init() and free() functions. This would
create even more error path scenarios.

This patch implements more clever free functions:

  + checks kobj.state_initialized instead of @limit

  + initializes patch->list early so that the check for empty list
    always works

  + The actions that have to be done outside klp_mutex are done
    in a separate klp_free_patch_wait() function. It waits only
    when patch->kobj was really released via the _prepare() part.

Note that it is safe to put patch->kobj under klp_mutex. It calls
the release callback only when the reference count reaches zero.
Therefore it does not block any related sysfs operation that took
a reference and might eventually wait for klp_mutex.

Note that __klp_free_patch() is split out because it will later be
used in a _nowait() variant. Also a separate klp_free_patch_wait()
makes sense because it will later get more complicated.

This patch does not change the existing behavior.
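The key trick — checking kobj.state_initialized instead of passing a @limit cursor — can be illustrated with a small userspace model. Everything here (`struct kobj`, `struct func`, the call counter) is a hypothetical stand-in, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the initialized-state bit of a kobject. */
struct kobj { bool state_initialized; };

static int puts_called;

static void kobject_put(struct kobj *kobj)
{
	/* In the kernel this drops a reference and may trigger release. */
	(void)kobj;
	puts_called++;
}

struct func { struct kobj kobj; };

/*
 * Like the consolidated klp_free_funcs(): iterate everything and let
 * each entry decide whether it needs cleanup. An error path that
 * initialized only some entries no longer needs a @limit cursor.
 */
static void free_funcs(struct func *funcs, int nr)
{
	for (int i = 0; i < nr; i++) {
		if (funcs[i].kobj.state_initialized)
			kobject_put(&funcs[i].kobj);
	}
}
```

For example, an init error path that managed to initialize only the first two of three entries releases exactly those two.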

Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Jason Baron <jbaron@akamai.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
---
 include/linux/livepatch.h |  2 ++
 kernel/livepatch/core.c   | 92 +++++++++++++++++++++++++++++------------------
 2 files changed, 59 insertions(+), 35 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 1163742b27c0..22e0767d64b0 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -138,6 +138,7 @@ struct klp_object {
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
  * @enabled:	the patch is enabled (but operation may be incomplete)
+ * @wait_free:	wait until the patch is freed
  * @finish:	for waiting till it is safe to remove the patch module
  */
 struct klp_patch {
@@ -149,6 +150,7 @@ struct klp_patch {
 	struct list_head list;
 	struct kobject kobj;
 	bool enabled;
+	bool wait_free;
 	struct completion finish;
 };
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index b3956cce239e..3ca404545150 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -465,17 +465,15 @@ static struct kobj_type klp_ktype_func = {
 	.sysfs_ops = &kobj_sysfs_ops,
 };
 
-/*
- * Free all functions' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_funcs_limited(struct klp_object *obj,
-				   struct klp_func *limit)
+static void klp_free_funcs(struct klp_object *obj)
 {
 	struct klp_func *func;
 
-	for (func = obj->funcs; func->old_name && func != limit; func++)
-		kobject_put(&func->kobj);
+	klp_for_each_func(obj, func) {
+		/* Might be called from klp_init_patch() error path. */
+		if (func->kobj.state_initialized)
+			kobject_put(&func->kobj);
+	}
 }
 
 /* Clean up when a patched object is unloaded */
@@ -489,26 +487,59 @@ static void klp_free_object_loaded(struct klp_object *obj)
 		func->old_addr = 0;
 }
 
-/*
- * Free all objects' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_objects_limited(struct klp_patch *patch,
-				     struct klp_object *limit)
+static void klp_free_objects(struct klp_patch *patch)
 {
 	struct klp_object *obj;
 
-	for (obj = patch->objs; obj->funcs && obj != limit; obj++) {
-		klp_free_funcs_limited(obj, NULL);
-		kobject_put(&obj->kobj);
+	klp_for_each_object(patch, obj) {
+		klp_free_funcs(obj);
+
+		/* Might be called from klp_init_patch() error path. */
+		if (obj->kobj.state_initialized)
+			kobject_put(&obj->kobj);
 	}
 }
 
-static void klp_free_patch(struct klp_patch *patch)
+static void __klp_free_patch(struct klp_patch *patch)
 {
-	klp_free_objects_limited(patch, NULL);
 	if (!list_empty(&patch->list))
 		list_del(&patch->list);
+
+	klp_free_objects(patch);
+
+	if (patch->kobj.state_initialized)
+		kobject_put(&patch->kobj);
+}
+
+/*
+ * Some operations are synchronized by klp_mutex, e.g. the access to
+ * klp_patches list. But the caller has to wait for patch->kobj release
+ * callback outside the lock. Otherwise, there might be a deadlock with
+ * sysfs operations waiting on klp_mutex.
+ *
+ * This function implements the free part that has to be called under
+ * klp_mutex.
+ */
+static void klp_free_patch_wait_prepare(struct klp_patch *patch)
+{
+	/* Can be called in error paths before patch->kobj is initialized. */
+	if (patch->kobj.state_initialized)
+		patch->wait_free = true;
+	else
+		patch->wait_free = false;
+
+	__klp_free_patch(patch);
+}
+
+/*
+ * This function implements the free part that must be called outside
+ * klp_mutex.
+ */
+static void klp_free_patch_wait(struct klp_patch *patch)
+{
+	/* Wait only when patch->kobj was initialized */
+	if (patch->wait_free)
+		wait_for_completion(&patch->finish);
 }
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
@@ -609,20 +640,12 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
 	klp_for_each_func(obj, func) {
 		ret = klp_init_func(obj, func);
 		if (ret)
-			goto free;
+			return ret;
 	}
 
-	if (klp_is_object_loaded(obj)) {
+	if (klp_is_object_loaded(obj))
 		ret = klp_init_object_loaded(patch, obj);
-		if (ret)
-			goto free;
-	}
 
-	return 0;
-
-free:
-	klp_free_funcs_limited(obj, func);
-	kobject_put(&obj->kobj);
 	return ret;
 }
 
@@ -637,6 +660,7 @@ static int klp_init_patch(struct klp_patch *patch)
 	mutex_lock(&klp_mutex);
 
 	patch->enabled = false;
+	INIT_LIST_HEAD(&patch->list);
 	init_completion(&patch->finish);
 
 	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
@@ -659,12 +683,11 @@ static int klp_init_patch(struct klp_patch *patch)
 	return 0;
 
 free:
-	klp_free_objects_limited(patch, obj);
+	klp_free_patch_wait_prepare(patch);
 
 	mutex_unlock(&klp_mutex);
 
-	kobject_put(&patch->kobj);
-	wait_for_completion(&patch->finish);
+	klp_free_patch_wait(patch);
 
 	return ret;
 }
@@ -693,12 +716,11 @@ int klp_unregister_patch(struct klp_patch *patch)
 		goto err;
 	}
 
-	klp_free_patch(patch);
+	klp_free_patch_wait_prepare(patch);
 
 	mutex_unlock(&klp_mutex);
 
-	kobject_put(&patch->kobj);
-	wait_for_completion(&patch->finish);
+	klp_free_patch_wait(patch);
 
 	return 0;
 err:
-- 
2.13.7



* [PATCH v12 05/12] livepatch: Refuse to unload only livepatches available during a forced transition
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (3 preceding siblings ...)
  2018-08-28 14:35 ` [PATCH v12 04/12] livepatch: Consolidate klp_free functions Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-08-28 14:35 ` [PATCH v12 06/12] livepatch: Simplify API by removing registration step Petr Mladek
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

module_put() is currently never called in klp_complete_transition() when
klp_force is set. As a result, we might keep the reference count even when
klp_enable_patch() fails and klp_cancel_transition() is called.

This might suggest that a module could get blocked in some strange init
state. Fortunately, that is not the case. The reference count is ignored
when mod->init fails, and erroneous modules are always removed.

Anyway, this might cause some confusion. Instead, this patch moves
the global klp_forced flag into struct klp_patch. As a result,
we block only modules that might still be in use after a forced
transition. Newly loaded livepatches might eventually be removed
completely.

It is not a big deal. But the code is at least consistent with
reality.
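The resulting rule — release the module reference only when the patch is being unpatched and was never part of a forced transition — can be sketched as a toy decision function. The names mirror the kernel code, but the harness around them is hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

enum klp_state { KLP_UNPATCHED, KLP_PATCHED };

/* Hypothetical reduction of struct klp_patch plus its module refcount. */
struct patch { bool forced; int mod_refcount; };

static void module_put(struct patch *p) { p->mod_refcount--; }

/*
 * Models the tail of klp_complete_transition(): a patch that was part
 * of a forced transition keeps its module reference forever; everything
 * else drops the reference when the unpatching transition completes.
 */
static void complete_transition(struct patch *p, enum klp_state target)
{
	if (!p->forced && target == KLP_UNPATCHED)
		module_put(p);
}
```

A normal patch thus drops its reference on unpatching, while a forced one keeps it, blocking module removal as intended.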

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 include/linux/livepatch.h     |  2 ++
 kernel/livepatch/core.c       |  4 +++-
 kernel/livepatch/core.h       |  1 +
 kernel/livepatch/transition.c | 10 +++++-----
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 22e0767d64b0..86b484b39326 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -138,6 +138,7 @@ struct klp_object {
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
  * @enabled:	the patch is enabled (but operation may be incomplete)
+ * @forced:	was involved in a forced transition
  * @wait_free:	wait until the patch is freed
  * @finish:	for waiting till it is safe to remove the patch module
  */
@@ -150,6 +151,7 @@ struct klp_patch {
 	struct list_head list;
 	struct kobject kobj;
 	bool enabled;
+	bool forced;
 	bool wait_free;
 	struct completion finish;
 };
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 3ca404545150..18af1dc0e199 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -45,7 +45,8 @@
  */
 DEFINE_MUTEX(klp_mutex);
 
-static LIST_HEAD(klp_patches);
+/* Registered patches */
+LIST_HEAD(klp_patches);
 
 static struct kobject *klp_root_kobj;
 
@@ -660,6 +661,7 @@ static int klp_init_patch(struct klp_patch *patch)
 	mutex_lock(&klp_mutex);
 
 	patch->enabled = false;
+	patch->forced = false;
 	INIT_LIST_HEAD(&patch->list);
 	init_completion(&patch->finish);
 
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 48a83d4364cf..d0cb5390e247 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -5,6 +5,7 @@
 #include <linux/livepatch.h>
 
 extern struct mutex klp_mutex;
+extern struct list_head klp_patches;
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 982a2e4c6120..30a28634c88c 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -33,8 +33,6 @@ struct klp_patch *klp_transition_patch;
 
 static int klp_target_state = KLP_UNDEFINED;
 
-static bool klp_forced = false;
-
 /*
  * This work can be performed periodically to finish patching or unpatching any
  * "straggler" tasks which failed to transition in the first attempt.
@@ -137,10 +135,10 @@ static void klp_complete_transition(void)
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
 	/*
-	 * klp_forced set implies unbounded increase of module's ref count if
+	 * patch->forced set implies unbounded increase of module's ref count if
 	 * the module is disabled/enabled in a loop.
 	 */
-	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
+	if (!klp_transition_patch->forced && klp_target_state == KLP_UNPATCHED)
 		module_put(klp_transition_patch->mod);
 
 	klp_target_state = KLP_UNDEFINED;
@@ -620,6 +618,7 @@ void klp_send_signals(void)
  */
 void klp_force_transition(void)
 {
+	struct klp_patch *patch;
 	struct task_struct *g, *task;
 	unsigned int cpu;
 
@@ -633,5 +632,6 @@ void klp_force_transition(void)
 	for_each_possible_cpu(cpu)
 		klp_update_patch_state(idle_task(cpu));
 
-	klp_forced = true;
+	list_for_each_entry(patch, &klp_patches, list)
+		patch->forced = true;
 }
-- 
2.13.7



* [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (4 preceding siblings ...)
  2018-08-28 14:35 ` [PATCH v12 05/12] livepatch: Refuse to unload only livepatches available during a forced transition Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-09-05  9:34   ` Miroslav Benes
  2018-08-28 14:35 ` [PATCH v12 07/12] livepatch: Use lists to manage patches, objects and functions Petr Mladek
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

The possibility to re-enable a registered patch was useful for immediate
patches where the livepatch module had to stay until the system reboot.
The improved consistency model allows achieving the same result by
unloading and loading the livepatch module again.

Also we are going to add a feature called atomic replace. It will allow
creating a patch that replaces all already registered patches. The
aim is to handle dependent patches in a more secure way. It will obsolete
the stack of patches that has helped to handle the dependencies so far.
Then it might be unclear when re-enabling a cumulative patch is safe.

It would be complicated to support all these modes. Instead, we can
actually make the API and code simpler.

This patch removes the two-step public API. All the checks and init calls
are moved from klp_register_patch() to klp_enable_patch(). Also the patch
is automatically freed, including the sysfs interface, when the transition
to the disabled state is completed.

As a result, there is never a disabled patch on the top of the stack.
Therefore we do not need to check the stack in __klp_enable_patch().
And we can simplify the check in __klp_disable_patch().

Also the API and logic are much easier. It is enough to call
klp_enable_patch() from the module_init() callback. The patch can be
disabled by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then
the module can be removed once the transition finishes and the sysfs
interface is freed.

IMPORTANT: We only need to be really careful about when and where to call
module_put(). It has to be called only when:

   + the reference was taken before
   + the module structures and code will no longer be accessed

Now, the disable operation is triggered from the sysfs interface. We clearly
cannot wait there until the interface is destroyed. Instead we need to call
module_put() in the release callback of patch->kobj. It is safe because:

  + The patch can no longer get re-enabled from enabled_store().

  + kobjects are designed to be part of structures that are freed from
    the release callback. We just need to make sure that module_put()
    is the last call accessing the patch in the callback.

In theory, we could be more relaxed in the klp_enable_patch() error paths
because they are called from module_init(). But it is better to be on the
safe side.

This patch does the following to keep the code sane:

  + patch->forced is replaced with patch->module_put and inverted logic.
    Then the free path can be called in the klp_enable_patch() error paths
    even before the reference is taken.

  + try_module_get() is called before initializing patch->kobj. It makes
    it more symmetric with the moved module_put().

  + module_put() is the last action also in klp_free_patch_wait(). This makes
    it safe for use outside module_init().
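With the registration step gone, the sysfs enabled attribute accepts only two actions: reversing a pending transition, or disabling an enabled patch; re-enabling is rejected because the interface is being torn down. A toy version of that decision tree, mirroring the new enabled_store() logic with hypothetical stand-ins for the transition state:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical reduction of struct klp_patch. */
struct patch { bool enabled; };

static struct patch *transition_patch;	/* NULL when no transition runs */
static int reversed;			/* counts reverse_transition() calls */

static void reverse_transition(void) { reversed++; }

static int disable_patch(struct patch *p) { p->enabled = false; return 0; }

/* Models the decision tree of the reworked enabled_store(). */
static int enabled_store(struct patch *p, bool enabled)
{
	if (p->enabled == enabled)
		return -EINVAL;		/* already in the requested state */

	if (p == transition_patch)
		reverse_transition();	/* pending transitions may be reversed */
	else if (!enabled)
		return disable_patch(p);
	else
		return -EINVAL;		/* re-enabling is no longer supported */

	return 0;
}
```

Writing '1' for a disabled patch fails with -EINVAL unless that very patch is in the middle of a transition, in which case the transition is reversed instead.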

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 Documentation/livepatch/livepatch.txt        | 121 +++++-------
 include/linux/livepatch.h                    |   7 +-
 kernel/livepatch/core.c                      | 280 ++++++++++-----------------
 kernel/livepatch/core.h                      |   2 +
 kernel/livepatch/transition.c                |  15 +-
 samples/livepatch/livepatch-callbacks-demo.c |  13 +-
 samples/livepatch/livepatch-sample.c         |  13 +-
 samples/livepatch/livepatch-shadow-fix1.c    |  14 +-
 samples/livepatch/livepatch-shadow-fix2.c    |  14 +-
 9 files changed, 157 insertions(+), 322 deletions(-)

diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 2d7ed09dbd59..7fb01d27d81d 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -14,10 +14,8 @@ Table of Contents:
    4.2. Metadata
    4.3. Livepatch module handling
 5. Livepatch life-cycle
-   5.1. Registration
-   5.2. Enabling
-   5.3. Disabling
-   5.4. Unregistration
+   5.1. Enabling
+   5.2. Disabling
 6. Sysfs
 7. Limitations
 
@@ -303,9 +301,8 @@ into three levels:
 
 The usual behavior is that the new functions will get used when
 the livepatch module is loaded. For this, the module init() function
-has to register the patch (struct klp_patch) and enable it. See the
-section "Livepatch life-cycle" below for more details about these
-two operations.
+has to enable the patch (struct klp_patch). See the section "Livepatch
+life-cycle" below for more details.
 
 Module removal is only safe when there are no users of the underlying
 functions. This is the reason why the force feature permanently disables
@@ -319,96 +316,66 @@ forced it is guaranteed that no task sleeps or runs in the old code.
 5. Livepatch life-cycle
 =======================
 
-Livepatching defines four basic operations that define the life cycle of each
-live patch: registration, enabling, disabling and unregistration.  There are
-several reasons why it is done this way.
+Livepatches get automatically enabled when the respective module is loaded.
+On the other hand, the module can be removed only after the patch was
+successfully disabled via the sysfs interface.
 
-First, the patch is applied only when all patched symbols for already
-loaded objects are found. The error handling is much easier if this
-check is done before particular functions get redirected.
 
-Second, it might take some time until the entire system is migrated with
-the hybrid consistency model being used. The patch revert might block
-the livepatch module removal for too long. Therefore it is useful to
-revert the patch using a separate operation that might be called
-explicitly. But it does not make sense to remove all information until
-the livepatch module is really removed.
-
-
-5.1. Registration
------------------
-
-Each patch first has to be registered using klp_register_patch(). This makes
-the patch known to the livepatch framework. Also it does some preliminary
-computing and checks.
-
-In particular, the patch is added into the list of known patches. The
-addresses of the patched functions are found according to their names.
-The special relocations, mentioned in the section "New functions", are
-applied. The relevant entries are created under
-/sys/kernel/livepatch/<name>. The patch is rejected when any operation
-fails.
-
-
-5.2. Enabling
+5.1. Enabling
 -------------
 
-Registered patches might be enabled either by calling klp_enable_patch() or
-by writing '1' to /sys/kernel/livepatch/<name>/enabled. The system will
-start using the new implementation of the patched functions at this stage.
+Livepatch modules have to call klp_enable_patch() in module_init() callback.
+This function is rather complex and might even fail in the early phase.
 
-When a patch is enabled, livepatch enters into a transition state where
-tasks are converging to the patched state.  This is indicated by a value
-of '1' in /sys/kernel/livepatch/<name>/transition.  Once all tasks have
-been patched, the 'transition' value changes to '0'.  For more
-information about this process, see the "Consistency model" section.
+First, the addresses of the patched functions are found according to their
+names. The special relocations, mentioned in the section "New functions",
+are applied. The relevant entries are created under
+/sys/kernel/livepatch/<name>. The patch is rejected when any of the
+above operations fails.
 
-If an original function is patched for the first time, a function
-specific struct klp_ops is created and an universal ftrace handler is
-registered.
+Second, livepatch enters a transition state where tasks are converging
+to the patched state. If an original function is patched for the first
+time, a function-specific struct klp_ops is created and a universal
+ftrace handler is registered[*]. This stage is indicated by a value of '1'
+in /sys/kernel/livepatch/<name>/transition. For more information about
+this process, see the "Consistency model" section.
 
-Functions might be patched multiple times. The ftrace handler is registered
-only once for the given function. Further patches just add an entry to the
-list (see field `func_stack`) of the struct klp_ops. The last added
-entry is chosen by the ftrace handler and becomes the active function
-replacement.
+Finally, once all tasks have been patched, the 'transition' value changes
+to '0'.
 
-Note that the patches might be enabled in a different order than they were
-registered.
+[*] Note that functions might be patched multiple times. The ftrace handler
+    is registered only once for a given function. Further patches just add
+    an entry to the list (see field `func_stack`) of the struct klp_ops.
+    The right implementation is selected by the ftrace handler, see
+    the "Consistency model" section.
 
 
-5.3. Disabling
+5.2. Disabling
 --------------
 
-Enabled patches might get disabled either by calling klp_disable_patch() or
-by writing '0' to /sys/kernel/livepatch/<name>/enabled. At this stage
-either the code from the previously enabled patch or even the original
-code gets used.
+Enabled patches might get disabled by writing '0' to
+/sys/kernel/livepatch/<name>/enabled.
 
-When a patch is disabled, livepatch enters into a transition state where
-tasks are converging to the unpatched state.  This is indicated by a
-value of '1' in /sys/kernel/livepatch/<name>/transition.  Once all tasks
-have been unpatched, the 'transition' value changes to '0'.  For more
-information about this process, see the "Consistency model" section.
+First, livepatch enters into a transition state where tasks are converging
+to the unpatched state. The system starts using either the code from
+the previously enabled patch or even the original one. This stage is
+indicated by a value of '1' in /sys/kernel/livepatch/<name>/transition.
+For more information about this process, see the "Consistency model"
+section.
 
-Here all the functions (struct klp_func) associated with the to-be-disabled
+Second, once all tasks have been unpatched, the 'transition' value changes
+to '0'. All the functions (struct klp_func) associated with the to-be-disabled
 patch are removed from the corresponding struct klp_ops. The ftrace handler
 is unregistered and the struct klp_ops is freed when the func_stack list
 becomes empty.
 
-Patches must be disabled in exactly the reverse order in which they were
-enabled. It makes the problem and the implementation much easier.
-
-
-5.4. Unregistration
--------------------
+Third, the sysfs interface is destroyed.
 
-Disabled patches might be unregistered by calling klp_unregister_patch().
-This can be done only when the patch is disabled and the code is no longer
-used. It must be called before the livepatch module gets unloaded.
+Finally, the module can be removed if the transition was not forced and the
+last sysfs entry has gone.
 
-At this stage, all the relevant sys-fs entries are removed and the patch
-is removed from the list of known patches.
+Note that patches must be disabled in exactly the reverse order in which
+they were enabled. It makes the problem and the implementation much easier.
 
 
 6. Sysfs
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 86b484b39326..b4424ef7e0ce 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -138,8 +138,8 @@ struct klp_object {
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
  * @enabled:	the patch is enabled (but operation may be incomplete)
- * @forced:	was involved in a forced transition
  * @wait_free:	wait until the patch is freed
+ * @module_put: module reference taken and patch not forced
  * @finish:	for waiting till it is safe to remove the patch module
  */
 struct klp_patch {
@@ -151,8 +151,8 @@ struct klp_patch {
 	struct list_head list;
 	struct kobject kobj;
 	bool enabled;
-	bool forced;
 	bool wait_free;
+	bool module_put;
 	struct completion finish;
 };
 
@@ -204,10 +204,7 @@ struct klp_patch {
 	     func->old_name || func->new_addr || func->old_sympos; \
 	     func++)
 
-int klp_register_patch(struct klp_patch *);
-int klp_unregister_patch(struct klp_patch *);
 int klp_enable_patch(struct klp_patch *);
-int klp_disable_patch(struct klp_patch *);
 
 void arch_klp_init_object_loaded(struct klp_patch *patch,
 				 struct klp_object *obj);
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 18af1dc0e199..6a47b36a6c9a 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -45,7 +45,7 @@
  */
 DEFINE_MUTEX(klp_mutex);
 
-/* Registered patches */
+/* Actively used patches. */
 LIST_HEAD(klp_patches);
 
 static struct kobject *klp_root_kobj;
@@ -83,17 +83,6 @@ static void klp_find_object_module(struct klp_object *obj)
 	mutex_unlock(&module_mutex);
 }
 
-static bool klp_is_patch_registered(struct klp_patch *patch)
-{
-	struct klp_patch *mypatch;
-
-	list_for_each_entry(mypatch, &klp_patches, list)
-		if (mypatch == patch)
-			return true;
-
-	return false;
-}
-
 static bool klp_initialized(void)
 {
 	return !!klp_root_kobj;
@@ -292,7 +281,6 @@ static int klp_write_object_relocations(struct module *pmod,
  * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
  */
 static int __klp_disable_patch(struct klp_patch *patch);
-static int __klp_enable_patch(struct klp_patch *patch);
 
 static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 			     const char *buf, size_t count)
@@ -309,40 +297,33 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
-		/*
-		 * Module with the patch could either disappear meanwhile or is
-		 * not properly initialized yet.
-		 */
-		ret = -EINVAL;
-		goto err;
-	}
-
 	if (patch->enabled == enabled) {
 		/* already in requested state */
 		ret = -EINVAL;
-		goto err;
+		goto out;
 	}
 
-	if (patch == klp_transition_patch) {
+	/*
+	 * Allow reversing a pending transition either way. It might be
+	 * necessary to complete the transition without forcing and breaking
+	 * the system integrity.
+	 *
+	 * Do not allow to re-enable a disabled patch because this interface
+	 * is being destroyed.
+	 */
+	if (patch == klp_transition_patch)
 		klp_reverse_transition();
-	} else if (enabled) {
-		ret = __klp_enable_patch(patch);
-		if (ret)
-			goto err;
-	} else {
+	else if (!enabled)
 		ret = __klp_disable_patch(patch);
-		if (ret)
-			goto err;
-	}
+	else
+		ret = -EINVAL;
 
+out:
 	mutex_unlock(&klp_mutex);
 
+	if (ret)
+		return ret;
 	return count;
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
 }
 
 static ssize_t enabled_show(struct kobject *kobj,
@@ -439,7 +420,12 @@ static void klp_kobj_release_patch(struct kobject *kobj)
 	struct klp_patch *patch;
 
 	patch = container_of(kobj, struct klp_patch, kobj);
-	complete(&patch->finish);
+
+	/* module_put() has to be the last call accessing the livepatch! */
+	if (patch->wait_free)
+		complete(&patch->finish);
+	else if (patch->module_put)
+		module_put(patch->mod);
 }
 
 static struct kobj_type klp_ktype_patch = {
@@ -513,6 +499,21 @@ static void __klp_free_patch(struct klp_patch *patch)
 }
 
 /*
+ * The asynchronous variant is useful when the patch is disabled
+ * via the sysfs interface, see enabled_store(). The module is put
+ * from the patch->kobj release callback.
+ */
+void klp_free_patch_nowait(struct klp_patch *patch)
+{
+	patch->wait_free = false;
+
+	__klp_free_patch(patch);
+}
+
+/*
+ * The synchronous variant is needed when the patch is freed in
+ * the klp_enable_patch() error paths.
+ *
  * Some operations are synchronized by klp_mutex, e.g. the access to
  * klp_patches list. But the caller has to wait for patch->kobj release
  * callback outside the lock. Otherwise, there might be a deadlock with
@@ -541,6 +542,10 @@ static void klp_free_patch_wait(struct klp_patch *patch)
 	/* Wait only when patch->kobj was initialized */
 	if (patch->wait_free)
 		wait_for_completion(&patch->finish);
+
+	/* Put the module after the last access to struct klp_patch. */
+	if (patch->module_put)
+		module_put(patch->mod);
 }
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
@@ -655,116 +660,38 @@ static int klp_init_patch(struct klp_patch *patch)
 	struct klp_object *obj;
 	int ret;
 
-	if (!patch->objs)
-		return -EINVAL;
-
-	mutex_lock(&klp_mutex);
-
 	patch->enabled = false;
-	patch->forced = false;
+	patch->module_put = false;
 	INIT_LIST_HEAD(&patch->list);
 	init_completion(&patch->finish);
 
+	if (!patch->objs)
+		return -EINVAL;
+
+	/*
+	 * A reference is taken on the patch module to prevent it from being
+	 * unloaded.
+	 */
+	if (!try_module_get(patch->mod))
+		return -ENODEV;
+	patch->module_put = true;
+
 	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
 				   klp_root_kobj, "%s", patch->mod->name);
 	if (ret) {
-		mutex_unlock(&klp_mutex);
 		return ret;
 	}
 
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
-			goto free;
+			return ret;
 	}
 
 	list_add_tail(&patch->list, &klp_patches);
 
-	mutex_unlock(&klp_mutex);
-
-	return 0;
-
-free:
-	klp_free_patch_wait_prepare(patch);
-
-	mutex_unlock(&klp_mutex);
-
-	klp_free_patch_wait(patch);
-
-	return ret;
-}
-
-/**
- * klp_unregister_patch() - unregisters a patch
- * @patch:	Disabled patch to be unregistered
- *
- * Frees the data structures and removes the sysfs interface.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_unregister_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (patch->enabled) {
-		ret = -EBUSY;
-		goto err;
-	}
-
-	klp_free_patch_wait_prepare(patch);
-
-	mutex_unlock(&klp_mutex);
-
-	klp_free_patch_wait(patch);
-
 	return 0;
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
 }
-EXPORT_SYMBOL_GPL(klp_unregister_patch);
-
-/**
- * klp_register_patch() - registers a patch
- * @patch:	Patch to be registered
- *
- * Initializes the data structure associated with the patch and
- * creates the sysfs interface.
- *
- * There is no need to take the reference on the patch module here. It is done
- * later when the patch is enabled.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_register_patch(struct klp_patch *patch)
-{
-	if (!patch || !patch->mod)
-		return -EINVAL;
-
-	if (!is_livepatch_module(patch->mod)) {
-		pr_err("module %s is not marked as a livepatch module\n",
-		       patch->mod->name);
-		return -EINVAL;
-	}
-
-	if (!klp_initialized())
-		return -ENODEV;
-
-	if (!klp_have_reliable_stack()) {
-		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
-		return -ENOSYS;
-	}
-
-	return klp_init_patch(patch);
-}
-EXPORT_SYMBOL_GPL(klp_register_patch);
 
 static int __klp_disable_patch(struct klp_patch *patch)
 {
@@ -777,8 +704,7 @@ static int __klp_disable_patch(struct klp_patch *patch)
 		return -EBUSY;
 
 	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches) &&
-	    list_next_entry(patch, list)->enabled)
+	if (!list_is_last(&patch->list, &klp_patches))
 		return -EBUSY;
 
 	klp_init_transition(patch, KLP_UNPATCHED);
@@ -797,44 +723,12 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	smp_wmb();
 
 	klp_start_transition();
-	klp_try_complete_transition();
 	patch->enabled = false;
+	klp_try_complete_transition();
 
 	return 0;
 }
 
-/**
- * klp_disable_patch() - disables a registered patch
- * @patch:	The registered, enabled patch to be disabled
- *
- * Unregisters the patched functions from ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_disable_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (!patch->enabled) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = __klp_disable_patch(patch);
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_disable_patch);
-
 static int __klp_enable_patch(struct klp_patch *patch)
 {
 	struct klp_object *obj;
@@ -846,17 +740,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	if (WARN_ON(patch->enabled))
 		return -EINVAL;
 
-	/* enforce stacking: only the first disabled patch can be enabled */
-	if (patch->list.prev != &klp_patches &&
-	    !list_prev_entry(patch, list)->enabled)
-		return -EBUSY;
-
-	/*
-	 * A reference is taken on the patch module to prevent it from being
-	 * unloaded.
-	 */
-	if (!try_module_get(patch->mod))
-		return -ENODEV;
+	if (!patch->kobj.state_initialized)
+		return -EINVAL;
 
 	pr_notice("enabling patch '%s'\n", patch->mod->name);
 
@@ -891,8 +776,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	}
 
 	klp_start_transition();
-	klp_try_complete_transition();
 	patch->enabled = true;
+	klp_try_complete_transition();
 
 	return 0;
 err:
@@ -903,11 +788,15 @@ static int __klp_enable_patch(struct klp_patch *patch)
 }
 
 /**
- * klp_enable_patch() - enables a registered patch
- * @patch:	The registered, disabled patch to be enabled
+ * klp_enable_patch() - enable the livepatch
+ * @patch:	patch to be enabled
  *
- * Performs the needed symbol lookups and code relocations,
- * then registers the patched functions with ftrace.
+ * Initializes the data structure associated with the patch, creates the sysfs
+ * interface, performs the needed symbol lookups and code relocations,
+ * registers the patched functions with ftrace.
+ *
+ * This function is supposed to be called from the livepatch module_init()
+ * callback.
  *
  * Return: 0 on success, otherwise error
  */
@@ -915,17 +804,44 @@ int klp_enable_patch(struct klp_patch *patch)
 {
 	int ret;
 
+	if (!patch || !patch->mod)
+		return -EINVAL;
+
+	if (!is_livepatch_module(patch->mod)) {
+		pr_err("module %s is not marked as a livepatch module\n",
+		       patch->mod->name);
+		return -EINVAL;
+	}
+
+	if (!klp_initialized())
+		return -ENODEV;
+
+	if (!klp_have_reliable_stack()) {
+		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
+		return -ENOSYS;
+	}
+
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
+	ret = klp_init_patch(patch);
+	if (ret)
 		goto err;
-	}
 
 	ret = __klp_enable_patch(patch);
+	if (ret)
+		goto err;
+
+	mutex_unlock(&klp_mutex);
+
+	return 0;
 
 err:
+	klp_free_patch_wait_prepare(patch);
+
 	mutex_unlock(&klp_mutex);
+
+	klp_free_patch_wait(patch);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(klp_enable_patch);
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index d0cb5390e247..d53b3ec83114 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -7,6 +7,8 @@
 extern struct mutex klp_mutex;
 extern struct list_head klp_patches;
 
+void klp_free_patch_nowait(struct klp_patch *patch);
+
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
 	return !obj->name || obj->mod;
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 30a28634c88c..d716757aa539 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -134,13 +134,6 @@ static void klp_complete_transition(void)
 	pr_notice("'%s': %s complete\n", klp_transition_patch->mod->name,
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	/*
-	 * patch->forced set implies unbounded increase of module's ref count if
-	 * the module is disabled/enabled in a loop.
-	 */
-	if (!klp_transition_patch->forced && klp_target_state == KLP_UNPATCHED)
-		module_put(klp_transition_patch->mod);
-
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
 }
@@ -357,6 +350,7 @@ void klp_try_complete_transition(void)
 {
 	unsigned int cpu;
 	struct task_struct *g, *task;
+	struct klp_patch *patch;
 	bool complete = true;
 
 	WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
@@ -405,7 +399,11 @@ void klp_try_complete_transition(void)
 	}
 
 	/* we're done, now cleanup the data structures */
+	patch = klp_transition_patch;
 	klp_complete_transition();
+
+	if (!patch->enabled)
+		klp_free_patch_nowait(patch);
 }
 
 /*
@@ -632,6 +630,7 @@ void klp_force_transition(void)
 	for_each_possible_cpu(cpu)
 		klp_update_patch_state(idle_task(cpu));
 
+	/* Prevent unloading of all livepatch modules. The code might be in use. */
 	list_for_each_entry(patch, &klp_patches, list)
-		patch->forced = true;
+		patch->module_put = false;
 }
diff --git a/samples/livepatch/livepatch-callbacks-demo.c b/samples/livepatch/livepatch-callbacks-demo.c
index 001a0c672251..4264f3862313 100644
--- a/samples/livepatch/livepatch-callbacks-demo.c
+++ b/samples/livepatch/livepatch-callbacks-demo.c
@@ -184,22 +184,11 @@ static struct klp_patch patch = {
 
 static int livepatch_callbacks_demo_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_callbacks_demo_exit(void)
 {
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_callbacks_demo_init);
diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
index de30d1ba4791..88afb708a48d 100644
--- a/samples/livepatch/livepatch-sample.c
+++ b/samples/livepatch/livepatch-sample.c
@@ -66,22 +66,11 @@ static struct klp_patch patch = {
 
 static int livepatch_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_exit(void)
 {
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_init);
diff --git a/samples/livepatch/livepatch-shadow-fix1.c b/samples/livepatch/livepatch-shadow-fix1.c
index 8f337b4a9108..c3053f6a93e9 100644
--- a/samples/livepatch/livepatch-shadow-fix1.c
+++ b/samples/livepatch/livepatch-shadow-fix1.c
@@ -148,25 +148,13 @@ static struct klp_patch patch = {
 
 static int livepatch_shadow_fix1_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_shadow_fix1_exit(void)
 {
 	/* Cleanup any existing SV_LEAK shadow variables */
 	klp_shadow_free_all(SV_LEAK, livepatch_fix1_dummy_leak_dtor);
-
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_shadow_fix1_init);
diff --git a/samples/livepatch/livepatch-shadow-fix2.c b/samples/livepatch/livepatch-shadow-fix2.c
index e8c0c0467bc0..fbde6cb5c68e 100644
--- a/samples/livepatch/livepatch-shadow-fix2.c
+++ b/samples/livepatch/livepatch-shadow-fix2.c
@@ -125,25 +125,13 @@ static struct klp_patch patch = {
 
 static int livepatch_shadow_fix2_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_shadow_fix2_exit(void)
 {
 	/* Cleanup any existing SV_COUNTER shadow variables */
 	klp_shadow_free_all(SV_COUNTER, NULL);
-
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_shadow_fix2_init);
-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 07/12] livepatch: Use lists to manage patches, objects and functions
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (5 preceding siblings ...)
  2018-08-28 14:35 ` [PATCH v12 06/12] livepatch: Simplify API by removing registration step Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-09-03 16:00   ` Miroslav Benes
  2018-08-28 14:35 ` [PATCH v12 08/12] livepatch: Add atomic replace Petr Mladek
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

From: Jason Baron <jbaron@akamai.com>

Currently klp_patch contains a pointer to a statically allocated array of
struct klp_object, and struct klp_object contains a pointer to a statically
allocated array of struct klp_func. In order to allow for the dynamic
allocation of objects and functions, link klp_patch, klp_object, and
klp_func together via linked lists. This allows us to more easily allocate
new objects and functions, while keeping the iterator a simple linked
list walk.

The static structures are added to the lists early. This allows the
dynamically allocated objects to be added before the klp_init_object()
and klp_init_func() calls, and therefore reduces the further changes
to the code.

This patch does not change the existing behavior.

Signed-off-by: Jason Baron <jbaron@akamai.com>
[pmladek@suse.com: Initialize lists before init calls]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
---
 include/linux/livepatch.h | 19 +++++++++++++++++--
 kernel/livepatch/core.c   | 16 ++++++++++++++++
 2 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index b4424ef7e0ce..e48a4917fee3 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/ftrace.h>
 #include <linux/completion.h>
+#include <linux/list.h>
 
 #if IS_ENABLED(CONFIG_LIVEPATCH)
 
@@ -42,6 +43,7 @@
  *		can be found (optional)
  * @old_addr:	the address of the function being patched
  * @kobj:	kobject for sysfs resources
+ * @node:	list node for klp_object func_list
  * @stack_node:	list node for klp_ops func_stack list
  * @old_size:	size of the old function
  * @new_size:	size of the new function
@@ -79,6 +81,7 @@ struct klp_func {
 	/* internal */
 	unsigned long old_addr;
 	struct kobject kobj;
+	struct list_head node;
 	struct list_head stack_node;
 	unsigned long old_size, new_size;
 	bool patched;
@@ -117,6 +120,8 @@ struct klp_callbacks {
  * @kobj:	kobject for sysfs resources
  * @mod:	kernel module associated with the patched object
  *		(NULL for vmlinux)
+ * @func_list:	dynamic list of the function entries
+ * @node:	list node for klp_patch obj_list
  * @patched:	the object's funcs have been added to the klp_ops list
  */
 struct klp_object {
@@ -127,6 +132,8 @@ struct klp_object {
 
 	/* internal */
 	struct kobject kobj;
+	struct list_head func_list;
+	struct list_head node;
 	struct module *mod;
 	bool patched;
 };
@@ -137,6 +144,7 @@ struct klp_object {
  * @objs:	object entries for kernel objects to be patched
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
+ * @obj_list:	dynamic list of the object entries
  * @enabled:	the patch is enabled (but operation may be incomplete)
  * @wait_free:	wait until the patch is freed
  * @module_put: module reference taken and patch not forced
@@ -150,6 +158,7 @@ struct klp_patch {
 	/* internal */
 	struct list_head list;
 	struct kobject kobj;
+	struct list_head obj_list;
 	bool enabled;
 	bool wait_free;
 	bool module_put;
@@ -196,14 +205,20 @@ struct klp_patch {
 	}
 #define KLP_OBJECT_END { }
 
-#define klp_for_each_object(patch, obj) \
+#define klp_for_each_object_static(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
-#define klp_for_each_func(obj, func) \
+#define klp_for_each_object(patch, obj)	\
+	list_for_each_entry(obj, &patch->obj_list, node)
+
+#define klp_for_each_func_static(obj, func) \
 	for (func = obj->funcs; \
 	     func->old_name || func->new_addr || func->old_sympos; \
 	     func++)
 
+#define klp_for_each_func(obj, func)	\
+	list_for_each_entry(func, &obj->func_list, node)
+
 int klp_enable_patch(struct klp_patch *);
 
 void arch_klp_init_object_loaded(struct klp_patch *patch,
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 6a47b36a6c9a..7bc23a106b5b 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -50,6 +50,21 @@ LIST_HEAD(klp_patches);
 
 static struct kobject *klp_root_kobj;
 
+static void klp_init_lists(struct klp_patch *patch)
+{
+	struct klp_object *obj;
+	struct klp_func *func;
+
+	INIT_LIST_HEAD(&patch->obj_list);
+	klp_for_each_object_static(patch, obj) {
+		list_add(&obj->node, &patch->obj_list);
+
+		INIT_LIST_HEAD(&obj->func_list);
+		klp_for_each_func_static(obj, func)
+			list_add(&func->node, &obj->func_list);
+	}
+}
+
 static bool klp_is_module(struct klp_object *obj)
 {
 	return obj->name;
@@ -664,6 +679,7 @@ static int klp_init_patch(struct klp_patch *patch)
 	patch->module_put = false;
 	INIT_LIST_HEAD(&patch->list);
 	init_completion(&patch->finish);
+	klp_init_lists(patch);
 
 	if (!patch->objs)
 		return -EINVAL;
-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 08/12] livepatch: Add atomic replace
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (6 preceding siblings ...)
  2018-08-28 14:35 ` [PATCH v12 07/12] livepatch: Use lists to manage patches, objects and functions Petr Mladek
@ 2018-08-28 14:35 ` Petr Mladek
  2018-08-28 14:36 ` [PATCH v12 09/12] livepatch: Remove Nop structures when unused Petr Mladek
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:35 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

From: Jason Baron <jbaron@akamai.com>

Sometimes we would like to revert a particular fix. Currently, this
is not easy because we want to keep all other fixes active, and we
can revert only the last applied patch.

One solution would be to apply a new patch that implements all
the reverted functions the same way as the original code. It would work
as expected, but there would be unnecessary redirections. In addition,
it would also require knowing which functions need to be reverted at
build time.

Another problem arises when many patches touch the same
functions. There might be dependencies between patches that are
not enforced on the kernel side. Also it might be pretty hard to
actually prepare such a patch and ensure compatibility with the other
patches.

Atomic replace && cumulative patches:

A better solution would be to create a cumulative patch and declare
that it replaces all older ones.

This patch adds a new "replace" flag to struct klp_patch. When it is
enabled, a set of 'nop' klp_func will be dynamically created for all
functions that are already being patched but that will no longer be
modified by the new patch. They are used as a new target during
the patch transition.
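
From a patch author's point of view, opting in amounts to setting the new
flag in struct klp_patch. A sketch along the lines of
samples/livepatch/livepatch-sample.c (not a standalone buildable module;
the patched function and its replacement are illustrative):

```c
static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_addr = (unsigned long)livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* NULL name means vmlinux */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
	.replace = true,	/* replace all previously applied livepatches */
};

static int livepatch_init(void)
{
	return klp_enable_patch(&patch);
}
```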

The idea is to handle the nop structures like the static ones. When
the dynamic structures are allocated, we initialize all the values that
are normally statically defined.

The only exception is "new_addr" in struct klp_func. It has to point
to the original function, and the address is known only when the object
(module) is loaded. Note that we really need to set it. The address is
used, for example, in klp_check_stack_func().

Nevertheless we still need to distinguish the dynamically allocated
structures in some operations. For this, we add "nop" flag into
struct klp_func and "dynamic" flag into struct klp_object. They
need special handling in the following situations:

  + The structures are added into the lists of objects and functions
    immediately. In fact, the lists were created for this purpose.

  + The address of the original function is known only when the patched
    object (module) is loaded. Therefore it is copied later in
    klp_init_object_loaded().

  + The ftrace handler must not set PC to func->new_addr. It would cause
    infinite loop because the address points back to the beginning of
    the original function.

  + The various free() functions must free the structure itself.

Note that other ways to detect the dynamic structures are not considered
safe. For example, even a statically defined struct klp_object might
include an empty funcs array. It might be there just to run some callbacks.

Special callbacks handling:

The callbacks from the replaced patches are intentionally _not_ called.
It would be pretty hard to define reasonable semantics and implement them.

It might even be counter-productive. The new patch is cumulative. It is
supposed to include most of the changes from older patches. In most cases,
it will not want to call the pre_unpatch() and post_unpatch() callbacks
from the replaced patches. It would disable/break things for no good
reason. Also it should be easier to handle various scenarios in a single
script in the new patch than to think about interactions caused by running
many scripts from older patches. Not to mention that the old scripts would
not even expect to be called in this situation.

Removing replaced patches:

One nice effect of the cumulative patches is that the code from the
older patches is no longer used. Therefore the replaced patches can
be removed. It has several advantages:

  + The nop structs will no longer be necessary and might be removed.
    This would save memory, restore performance (no ftrace handler),
    and allow a clear view of what is really patched.

  + Disabling the patch will cause the original code to be used everywhere.
    Therefore the livepatch callbacks need to handle only one scenario.
    Note that the situation is already complex enough when the patch
    gets enabled. It is currently solved by calling callbacks only from
    the new cumulative patch.

  + The state is clean in both the sysfs interface and lsmod. The modules
    with the replaced livepatches might even get removed from the system.

Some people actually expected this behavior from the beginning. After all
a cumulative patch is supposed to "completely" replace an existing one.
It is like when a new version of an application replaces an older one.

This patch does the first step. It removes the replaced patches from
the list of patches. It is safe. The consistency model ensures that
they are no longer used. In other words, each process works only with
the structures from klp_transition_patch.

The removal is done by a special function. It combines the actions done by
__klp_disable_patch() and klp_complete_transition(). But it is a fast
track without all the transaction-related stuff.

Signed-off-by: Jason Baron <jbaron@akamai.com>
[pmladek@suse.com: Split, reuse existing code, simplified]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
---
 include/linux/livepatch.h     |   6 ++
 kernel/livepatch/core.c       | 221 +++++++++++++++++++++++++++++++++++++++++-
 kernel/livepatch/core.h       |   1 +
 kernel/livepatch/patch.c      |   8 ++
 kernel/livepatch/transition.c |   3 +
 5 files changed, 236 insertions(+), 3 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index e48a4917fee3..97c3f366cf18 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -47,6 +47,7 @@
  * @stack_node:	list node for klp_ops func_stack list
  * @old_size:	size of the old function
  * @new_size:	size of the new function
+ * @nop:        temporary patch to use the original code again; dyn. allocated
  * @patched:	the func has been added to the klp_ops list
  * @transition:	the func is currently being applied or reverted
  *
@@ -84,6 +85,7 @@ struct klp_func {
 	struct list_head node;
 	struct list_head stack_node;
 	unsigned long old_size, new_size;
+	bool nop;
 	bool patched;
 	bool transition;
 };
@@ -122,6 +124,7 @@ struct klp_callbacks {
  *		(NULL for vmlinux)
  * @func_list:	dynamic list of the function entries
  * @node:	list node for klp_patch obj_list
+ * @dynamic:    temporary object for nop functions; dynamically allocated
  * @patched:	the object's funcs have been added to the klp_ops list
  */
 struct klp_object {
@@ -135,6 +138,7 @@ struct klp_object {
 	struct list_head func_list;
 	struct list_head node;
 	struct module *mod;
+	bool dynamic;
 	bool patched;
 };
 
@@ -142,6 +146,7 @@ struct klp_object {
  * struct klp_patch - patch structure for live patching
  * @mod:	reference to the live patch module
  * @objs:	object entries for kernel objects to be patched
+ * @replace:	replace all already registered patches
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
  * @obj_list:	dynamic list of the object entries
@@ -154,6 +159,7 @@ struct klp_patch {
 	/* external */
 	struct module *mod;
 	struct klp_object *objs;
+	bool replace;
 
 	/* internal */
 	struct list_head list;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 7bc23a106b5b..db12c86c4f26 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -103,6 +103,40 @@ static bool klp_initialized(void)
 	return !!klp_root_kobj;
 }
 
+static struct klp_func *klp_find_func(struct klp_object *obj,
+				      struct klp_func *old_func)
+{
+	struct klp_func *func;
+
+	klp_for_each_func(obj, func) {
+		if ((strcmp(old_func->old_name, func->old_name) == 0) &&
+		    (old_func->old_sympos == func->old_sympos)) {
+			return func;
+		}
+	}
+
+	return NULL;
+}
+
+static struct klp_object *klp_find_object(struct klp_patch *patch,
+					  struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+
+	klp_for_each_object(patch, obj) {
+		if (klp_is_module(old_obj)) {
+			if (klp_is_module(obj) &&
+			    strcmp(old_obj->name, obj->name) == 0) {
+				return obj;
+			}
+		} else if (!klp_is_module(obj)) {
+			return obj;
+		}
+	}
+
+	return NULL;
+}
+
 struct klp_find_arg {
 	const char *objname;
 	const char *name;
@@ -430,6 +464,123 @@ static struct attribute *klp_patch_attrs[] = {
 	NULL
 };
 
+/*
+ * Dynamically allocated objects and functions.
+ */
+static void klp_free_object_dynamic(struct klp_object *obj)
+{
+	kfree(obj->name);
+	kfree(obj);
+}
+
+static struct klp_object *klp_alloc_object_dynamic(const char *name)
+{
+	struct klp_object *obj;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return NULL;
+
+	if (name) {
+		obj->name = kstrdup(name, GFP_KERNEL);
+		if (!obj->name) {
+			kfree(obj);
+			return NULL;
+		}
+	}
+
+	INIT_LIST_HEAD(&obj->func_list);
+	obj->dynamic = true;
+
+	return obj;
+}
+
+static void klp_free_func_nop(struct klp_func *func)
+{
+	kfree(func->old_name);
+	kfree(func);
+}
+
+static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,
+					   struct klp_object *obj)
+{
+	struct klp_func *func;
+
+	func = kzalloc(sizeof(*func), GFP_KERNEL);
+	if (!func)
+		return NULL;
+
+	if (old_func->old_name) {
+		func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
+		if (!func->old_name) {
+			kfree(func);
+			return NULL;
+		}
+	}
+
+	/*
+	 * func->new_addr is the same as func->old_addr. These addresses are
+	 * set when the object is loaded, see klp_init_object_loaded().
+	 */
+	func->old_sympos = old_func->old_sympos;
+	func->nop = true;
+
+	return func;
+}
+
+static int klp_add_object_nops(struct klp_patch *patch,
+			       struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	struct klp_func *func, *old_func;
+
+	obj = klp_find_object(patch, old_obj);
+
+	if (!obj) {
+		obj = klp_alloc_object_dynamic(old_obj->name);
+		if (!obj)
+			return -ENOMEM;
+
+		list_add(&obj->node, &patch->obj_list);
+	}
+
+	klp_for_each_func(old_obj, old_func) {
+		func = klp_find_func(obj, old_func);
+		if (func)
+			continue;
+
+		func = klp_alloc_func_nop(old_func, obj);
+		if (!func)
+			return -ENOMEM;
+
+		list_add(&func->node, &obj->func_list);
+	}
+
+	return 0;
+}
+
+/*
+ * Add 'nop' functions which simply return to the caller to run
+ * the original function. The 'nop' functions are added to a
+ * patch to facilitate a 'replace' mode.
+ */
+static int klp_add_nops(struct klp_patch *patch)
+{
+	struct klp_patch *old_patch;
+	struct klp_object *old_obj;
+	int err = 0;
+
+	list_for_each_entry(old_patch, &klp_patches, list) {
+		klp_for_each_object(old_patch, old_obj) {
+			err = klp_add_object_nops(patch, old_obj);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
 static void klp_kobj_release_patch(struct kobject *kobj)
 {
 	struct klp_patch *patch;
@@ -451,6 +602,12 @@ static struct kobj_type klp_ktype_patch = {
 
 static void klp_kobj_release_object(struct kobject *kobj)
 {
+	struct klp_object *obj;
+
+	obj = container_of(kobj, struct klp_object, kobj);
+
+	if (obj->dynamic)
+		klp_free_object_dynamic(obj);
 }
 
 static struct kobj_type klp_ktype_object = {
@@ -460,6 +617,12 @@ static struct kobj_type klp_ktype_object = {
 
 static void klp_kobj_release_func(struct kobject *kobj)
 {
+	struct klp_func *func;
+
+	func = container_of(kobj, struct klp_func, kobj);
+
+	if (func->nop)
+		klp_free_func_nop(func);
 }
 
 static struct kobj_type klp_ktype_func = {
@@ -475,6 +638,8 @@ static void klp_free_funcs(struct klp_object *obj)
 		/* Might be called from klp_init_patch() error path. */
 		if (func->kobj.state_initialized)
 			kobject_put(&func->kobj);
+		else if (func->nop)
+			klp_free_func_nop(func);
 	}
 }
 
@@ -485,8 +650,12 @@ static void klp_free_object_loaded(struct klp_object *obj)
 
 	obj->mod = NULL;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func) {
 		func->old_addr = 0;
+
+		if (func->nop)
+			func->new_addr = 0;
+	}
 }
 
 static void klp_free_objects(struct klp_patch *patch)
@@ -499,6 +668,8 @@ static void klp_free_objects(struct klp_patch *patch)
 		/* Might be called from klp_init_patch() error path. */
 		if (obj->kobj.state_initialized)
 			kobject_put(&obj->kobj);
+		else if (obj->dynamic)
+			klp_free_object_dynamic(obj);
 	}
 }
 
@@ -565,7 +736,14 @@ static void klp_free_patch_wait(struct klp_patch *patch)
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 {
-	if (!func->old_name || !func->new_addr)
+	if (!func->old_name)
+		return -EINVAL;
+
+	/*
+	 * NOPs get the address later. The patched module must be loaded,
+	 * see klp_init_object_loaded().
+	 */
+	if (!func->new_addr && !func->nop)
 		return -EINVAL;
 
 	if (strlen(func->old_name) >= KSYM_NAME_LEN)
@@ -623,6 +801,9 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 			return -ENOENT;
 		}
 
+		if (func->nop)
+			func->new_addr = func->old_addr;
+
 		ret = kallsyms_lookup_size_offset(func->new_addr,
 						  &func->new_size, NULL);
 		if (!ret) {
@@ -641,7 +822,7 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
 	int ret;
 	const char *name;
 
-	if (!obj->funcs)
+	if (!obj->funcs && !obj->dynamic)
 		return -EINVAL;
 
 	if (klp_is_module(obj) && strlen(obj->name) >= MODULE_NAME_LEN)
@@ -698,6 +879,12 @@ static int klp_init_patch(struct klp_patch *patch)
 		return ret;
 	}
 
+	if (patch->replace) {
+		ret = klp_add_nops(patch);
+		if (ret)
+			return ret;
+	}
+
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
@@ -863,6 +1050,34 @@ int klp_enable_patch(struct klp_patch *patch)
 EXPORT_SYMBOL_GPL(klp_enable_patch);
 
 /*
+ * This function removes replaced patches.
+ *
+ * We could be pretty aggressive here. It is called in the situation where
+ * these structures are no longer accessible. All functions are redirected
+ * by the klp_transition_patch. They use either a new code or they are in
+ * the original code because of the special nop function patches.
+ *
+ * The only exception is when the transition was forced. In this case,
+ * klp_ftrace_handler() might still see the replaced patch on the stack.
+ * Fortunately, it is carefully designed to work with removed functions
+ * thanks to RCU. We only have to keep the patch modules loaded, which
+ * is handled transparently by patch->module_put.
+ */
+void klp_discard_replaced_patches(struct klp_patch *new_patch)
+{
+	struct klp_patch *old_patch, *tmp_patch;
+
+	list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches, list) {
+		if (old_patch == new_patch)
+			return;
+
+		old_patch->enabled = false;
+		klp_unpatch_objects(old_patch);
+		klp_free_patch_nowait(old_patch);
+	}
+}
+
+/*
  * Remove parts of patches that touch a given kernel module. The list of
  * patches processed might be limited. When limit is NULL, all patches
  * will be handled.
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index d53b3ec83114..1800ba026e73 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -8,6 +8,7 @@ extern struct mutex klp_mutex;
 extern struct list_head klp_patches;
 
 void klp_free_patch_nowait(struct klp_patch *patch);
+void klp_discard_replaced_patches(struct klp_patch *new_patch);
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 82927f59d3ff..7754510116d7 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -118,7 +118,15 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 		}
 	}
 
+	/*
+	 * NOPs are used to replace existing patches with original code.
+	 * Do nothing! Setting pc would cause an infinite loop.
+	 */
+	if (func->nop)
+		goto unlock;
+
 	klp_arch_set_pc(regs, func->new_addr);
+
 unlock:
 	preempt_enable_notrace();
 }
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index d716757aa539..468a7b3305ec 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -85,6 +85,9 @@ static void klp_complete_transition(void)
 		 klp_transition_patch->mod->name,
 		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+		klp_discard_replaced_patches(klp_transition_patch);
+
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
 		 * All tasks have transitioned to KLP_UNPATCHED so we can now
-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 09/12] livepatch: Remove Nop structures when unused
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (7 preceding siblings ...)
  2018-08-28 14:35 ` [PATCH v12 08/12] livepatch: Add atomic replace Petr Mladek
@ 2018-08-28 14:36 ` Petr Mladek
  2018-09-04 14:50   ` Miroslav Benes
  2018-08-28 14:36 ` [PATCH v12 10/12] livepatch: Atomic replace and cumulative patches documentation Petr Mladek
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:36 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

Replaced patches are removed from the stack when the transition
finishes. It means that the Nop structures will never be needed again
and can be removed. Why should we care?

  + Nop structures give the false impression that the function is
    patched even though the ftrace handler has no effect.

  + Ftrace handlers are not completely free. They cause a slowdown that
    might be visible in some workloads. The ftrace-related slowdown might
    actually be the reason why the function is no longer patched in
    the new cumulative patch. One would expect the cumulative patch
    to solve these problems as well.

  + Cumulative patches are supposed to replace any earlier version of
    the patch. The number of NOPs depends on which version was replaced.
    This multiplies the number of scenarios that might happen.

    One might say that NOPs are innocent. But there are even optimized
    NOP instructions for different processors, see, for example,
    arch/x86/kernel/alternative.c. And klp_ftrace_handler() is much
    more complicated.

  + It sounds natural to clean up a mess that is no longer needed.
    Things could only get worse if we did not do it.

This patch allows unpatching and freeing the dynamic structures
independently when the transition finishes.

The free part is a bit tricky because kobject free callbacks are called
asynchronously and cannot easily be waited for. Fortunately, we do not
have to wait: any further access can be avoided by removing the
structures from the dynamic lists.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 include/linux/livepatch.h     |  6 ++++
 kernel/livepatch/core.c       | 72 ++++++++++++++++++++++++++++++++++++++-----
 kernel/livepatch/core.h       |  2 +-
 kernel/livepatch/patch.c      | 31 ++++++++++++++++---
 kernel/livepatch/patch.h      |  1 +
 kernel/livepatch/transition.c |  2 +-
 6 files changed, 99 insertions(+), 15 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 97c3f366cf18..5d897a396dc4 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -214,6 +214,9 @@ struct klp_patch {
 #define klp_for_each_object_static(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
+#define klp_for_each_object_safe(patch, obj, tmp_obj)		\
+	list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
+
 #define klp_for_each_object(patch, obj)	\
 	list_for_each_entry(obj, &patch->obj_list, node)
 
@@ -222,6 +225,9 @@ struct klp_patch {
 	     func->old_name || func->new_addr || func->old_sympos; \
 	     func++)
 
+#define klp_for_each_func_safe(obj, func, tmp_func)			\
+	list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
+
 #define klp_for_each_func(obj, func)	\
 	list_for_each_entry(func, &obj->func_list, node)
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index db12c86c4f26..695d565f23c1 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -630,11 +630,20 @@ static struct kobj_type klp_ktype_func = {
 	.sysfs_ops = &kobj_sysfs_ops,
 };
 
-static void klp_free_funcs(struct klp_object *obj)
+static void __klp_free_funcs(struct klp_object *obj, bool free_all)
 {
-	struct klp_func *func;
+	struct klp_func *func, *tmp_func;
+
+	klp_for_each_func_safe(obj, func, tmp_func) {
+		if (!free_all && !func->nop)
+			continue;
+
+		/*
+		 * Avoid double free. It would be tricky to wait for kobject
+		 * callbacks when only NOPs are handled.
+		 */
+		list_del(&func->node);
 
-	klp_for_each_func(obj, func) {
 		/* Might be called from klp_init_patch() error path. */
 		if (func->kobj.state_initialized)
 			kobject_put(&func->kobj);
@@ -658,12 +667,21 @@ static void klp_free_object_loaded(struct klp_object *obj)
 	}
 }
 
-static void klp_free_objects(struct klp_patch *patch)
+static void __klp_free_objects(struct klp_patch *patch, bool free_all)
 {
-	struct klp_object *obj;
+	struct klp_object *obj, *tmp_obj;
 
-	klp_for_each_object(patch, obj) {
-		klp_free_funcs(obj);
+	klp_for_each_object_safe(patch, obj, tmp_obj) {
+		__klp_free_funcs(obj, free_all);
+
+		if (!free_all && !obj->dynamic)
+			continue;
+
+		/*
+		 * Avoid double free. It would be tricky to wait for kobject
+		 * callbacks when only dynamic objects are handled.
+		 */
+		list_del(&obj->node);
 
 		/* Might be called from klp_init_patch() error path. */
 		if (obj->kobj.state_initialized)
@@ -673,6 +691,16 @@ static void klp_free_objects(struct klp_patch *patch)
 	}
 }
 
+static void klp_free_objects(struct klp_patch *patch)
+{
+	__klp_free_objects(patch, true);
+}
+
+void klp_free_objects_dynamic(struct klp_patch *patch)
+{
+	__klp_free_objects(patch, false);
+}
+
 static void __klp_free_patch(struct klp_patch *patch)
 {
 	if (!list_empty(&patch->list))
@@ -1063,7 +1091,7 @@ EXPORT_SYMBOL_GPL(klp_enable_patch);
  * thanks to RCU. We only have to keep the patches on the system. Also
  * this is handled transparently by patch->module_put.
  */
-void klp_discard_replaced_patches(struct klp_patch *new_patch)
+static void klp_discard_replaced_patches(struct klp_patch *new_patch)
 {
 	struct klp_patch *old_patch, *tmp_patch;
 
@@ -1078,6 +1106,34 @@ void klp_discard_replaced_patches(struct klp_patch *new_patch)
 }
 
 /*
+ * This function removes the dynamically allocated 'nop' functions.
+ *
+ * We can be pretty aggressive here. NOPs do not change the existing
+ * behavior except for adding an unnecessary delay in the ftrace handler.
+ *
+ * It is safe even when the transition was forced. The ftrace handler
+ * will see a valid ops->func_stack entry thanks to RCU.
+ *
+ * We can even free the NOP structures. They must be the last entry
+ * in ops->func_stack. Therefore unregister_ftrace_function() is called.
+ * It does the same as klp_synchronize_transition() to make sure that
+ * nobody is inside the ftrace handler once the operation finishes.
+ *
+ * IMPORTANT: It must be called right after removing the replaced patches!
+ */
+static void klp_discard_nops(struct klp_patch *new_patch)
+{
+	klp_unpatch_objects_dynamic(klp_transition_patch);
+	klp_free_objects_dynamic(klp_transition_patch);
+}
+
+void klp_discard_replaced_stuff(struct klp_patch *new_patch)
+{
+	klp_discard_replaced_patches(new_patch);
+	klp_discard_nops(new_patch);
+}
+
+/*
  * Remove parts of patches that touch a given kernel module. The list of
  * patches processed might be limited. When limit is NULL, all patches
  * will be handled.
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 1800ba026e73..f3d7aeba5e1d 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -8,7 +8,7 @@ extern struct mutex klp_mutex;
 extern struct list_head klp_patches;
 
 void klp_free_patch_nowait(struct klp_patch *patch);
-void klp_discard_replaced_patches(struct klp_patch *new_patch);
+void klp_discard_replaced_stuff(struct klp_patch *new_patch);
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 7754510116d7..47f8ad59293a 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -244,15 +244,26 @@ static int klp_patch_func(struct klp_func *func)
 	return ret;
 }
 
-void klp_unpatch_object(struct klp_object *obj)
+static void __klp_unpatch_object(struct klp_object *obj, bool unpatch_all)
 {
 	struct klp_func *func;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func) {
+		if (!unpatch_all && !func->nop)
+			continue;
+
 		if (func->patched)
 			klp_unpatch_func(func);
+	}
 
-	obj->patched = false;
+	if (unpatch_all || obj->dynamic)
+		obj->patched = false;
+}
+
+
+void klp_unpatch_object(struct klp_object *obj)
+{
+	__klp_unpatch_object(obj, true);
 }
 
 int klp_patch_object(struct klp_object *obj)
@@ -275,11 +286,21 @@ int klp_patch_object(struct klp_object *obj)
 	return 0;
 }
 
-void klp_unpatch_objects(struct klp_patch *patch)
+static void __klp_unpatch_objects(struct klp_patch *patch, bool unpatch_all)
 {
 	struct klp_object *obj;
 
 	klp_for_each_object(patch, obj)
 		if (obj->patched)
-			klp_unpatch_object(obj);
+			__klp_unpatch_object(obj, unpatch_all);
+}
+
+void klp_unpatch_objects(struct klp_patch *patch)
+{
+	__klp_unpatch_objects(patch, true);
+}
+
+void klp_unpatch_objects_dynamic(struct klp_patch *patch)
+{
+	__klp_unpatch_objects(patch, false);
 }
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index e72d8250d04b..cd8e1f03b22b 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -30,5 +30,6 @@ struct klp_ops *klp_find_ops(unsigned long old_addr);
 int klp_patch_object(struct klp_object *obj);
 void klp_unpatch_object(struct klp_object *obj);
 void klp_unpatch_objects(struct klp_patch *patch);
+void klp_unpatch_objects_dynamic(struct klp_patch *patch);
 
 #endif /* _LIVEPATCH_PATCH_H */
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 468a7b3305ec..24f7a90d0042 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -86,7 +86,7 @@ static void klp_complete_transition(void)
 		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
 	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
-		klp_discard_replaced_patches(klp_transition_patch);
+		klp_discard_replaced_stuff(klp_transition_patch);
 
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 10/12] livepatch: Atomic replace and cumulative patches documentation
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (8 preceding siblings ...)
  2018-08-28 14:36 ` [PATCH v12 09/12] livepatch: Remove Nop structures when unused Petr Mladek
@ 2018-08-28 14:36 ` Petr Mladek
  2018-09-04 15:15   ` Miroslav Benes
  2018-08-28 14:36 ` [PATCH v12 11/12] livepatch: Remove ordering and refuse loading conflicting patches Petr Mladek
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:36 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

User documentation for the atomic replace feature. It makes it easier
to maintain livepatches using so-called cumulative patches.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 Documentation/livepatch/cumulative-patches.txt | 105 +++++++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 Documentation/livepatch/cumulative-patches.txt

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
new file mode 100644
index 000000000000..206b7f98d270
--- /dev/null
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -0,0 +1,105 @@
+===================================
+Atomic Replace & Cumulative Patches
+===================================
+
+There might be dependencies between livepatches. If multiple patches need
+to make different changes to the same function(s), then we need to define
+the order in which the patches will be installed. Function implementations
+from any newer livepatch must be built on top of the older ones.
+
+This might become a maintenance nightmare, especially when someone wants
+to remove a patch that is in the middle of the stack.
+
+An elegant solution comes with the feature called "Atomic Replace". It
+allows creating so-called "Cumulative Patches" that include all wanted changes
+from all older livepatches and completely replace them in one transition.
+
+Usage
+-----
+
+The atomic replace can be enabled by setting the "replace" flag in
+struct klp_patch, for example:
+
+	static struct klp_patch patch = {
+		.mod = THIS_MODULE,
+		.objs = objs,
+		.replace = true,
+	};
+
+Such a patch is added on top of the livepatch stack when registered. It can
+be enabled even when some earlier patches have not been enabled yet.
+
+All processes are then migrated to use the code only from the new patch.
+Once the transition is finished, all older patches are removed from the stack
+of patches, including the older not-yet-enabled patches mentioned above. They
+can even be unregistered and the related modules unloaded.
+
+Ftrace handlers are transparently removed from functions that are no
+longer modified by the new cumulative patch.
+
+As a result, the livepatch authors might maintain sources only for one
+cumulative patch. It helps to keep the patch consistent while adding or
+removing various fixes or features.
+
+Users could keep only the last patch installed on the system after
+the transition has finished. It helps to clearly see what code is
+actually in use. Also the livepatch might then be seen as a "normal"
+module that modifies the kernel behavior. The only difference is that
+it can be updated at runtime without breaking its functionality.
+
+
+Features
+--------
+
+The atomic replace allows:
+
+  + Atomically revert some functions in a previous patch while
+    upgrading other functions.
+
+  + Remove any performance impact caused by code redirection
+    for functions that are no longer patched.
+
+  + Decrease user confusion about stacking order and what patches are
+    currently in effect.
+
+
+Limitations:
+------------
+
+  + Replaced patches can no longer be enabled. But if the transition
+    to the cumulative patch was not forced, the kernel modules with
+    the older livepatches can be removed and later loaded again.
+
+    A good practice is to set the .replace flag in any released livepatch.
+    Then re-adding an older livepatch is equivalent to downgrading
+    to that patch. This is safe as long as the livepatches do _not_ do
+    extra modifications in (un)patching callbacks or in the module_init()
+    or module_exit() functions, see below.
+
+
+  + Only the (un)patching callbacks from the _new_ cumulative livepatch are
+    executed. Any callbacks from the replaced patches are ignored.
+
+    In other words, the cumulative patch is responsible for doing any actions
+    that are necessary to properly replace any older patch.
+
+    As a result, it might be dangerous to replace newer cumulative patches
+    with older ones. The old livepatches might not provide the necessary callbacks.
+
+    This might be seen as a limitation in some scenarios. But it makes
+    life easier in many others. Only the new cumulative livepatch knows what
+    fixes/features are added/removed and what special actions are necessary
+    for a smooth transition.
+
+    In any case, it would be a nightmare to think about the order of
+    the various callbacks and their interactions if the callbacks from all
+    enabled patches were called.
+
+
+  + There is no special handling of shadow variables. Livepatch authors
+    must define their own rules for passing them from one cumulative
+    patch to the next. In particular, they should not blindly remove them
+    in module_exit() functions.
+
+    A good practice might be to remove shadow variables in the post-unpatch
+    callback. It is called only when the livepatch is properly disabled.
-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 11/12] livepatch: Remove ordering and refuse loading conflicting patches
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (9 preceding siblings ...)
  2018-08-28 14:36 ` [PATCH v12 10/12] livepatch: Atomic replace and cumulative patches documentation Petr Mladek
@ 2018-08-28 14:36 ` Petr Mladek
  2018-08-28 14:36 ` [PATCH v12 12/12] selftests/livepatch: introduce tests Petr Mladek
  2018-08-30 11:58 ` [PATCH v12 00/12] Miroslav Benes
  12 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:36 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel, Petr Mladek

The atomic replace and cumulative patches were introduced as a more secure
way to handle dependent patches. They simplify the logic:

  + Any new cumulative patch is supposed to take over shadow variables
    and changes made by callbacks from previous livepatches.

  + All replaced patches are discarded and the modules can be unloaded.
    As a result, there is only one scenario when a cumulative livepatch
    gets disabled.

The different handling of "normal" and cumulative patches might cause
confusion. It would make sense to keep only one mode. On the other hand,
it would be rude to enforce using the cumulative livepatches even for
trivial and independent (hot) fixes.

This patch removes the stack of patches. The list of enabled patches
is still needed but the ordering is no longer enforced.

Note that it is not possible to catch all possible dependencies. It is
the responsibility of the livepatch authors to handle them.

Nevertheless, this patch prevents having two patches for the same function
enabled at the same time after the transition finishes. It might help
to catch obvious mistakes. But more importantly, we do not need to
handle the situation where a patch in the middle of the function stack
(ops->func_stack) is being removed.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 Documentation/livepatch/livepatch.txt | 30 +++++++++++--------
 kernel/livepatch/core.c               | 56 +++++++++++++++++++++++++++++++----
 2 files changed, 68 insertions(+), 18 deletions(-)

diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 7fb01d27d81d..8d985cab0a21 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -141,9 +141,9 @@ without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
 the kernel livepatching.
 
 The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
-is in transition.  Only a single patch (the topmost patch on the stack)
-can be in transition at a given time.  A patch can remain in transition
-indefinitely, if any of the tasks are stuck in the initial patch state.
+is in transition.  Only a single patch can be in transition at a given
+time.  A patch can remain in transition indefinitely, if any of the tasks
+are stuck in the initial patch state.
 
 A transition can be reversed and effectively canceled by writing the
 opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
@@ -327,9 +327,10 @@ successfully disabled via the sysfs interface.
 Livepatch modules have to call klp_enable_patch() in module_init() callback.
 This function is rather complex and might even fail in the early phase.
 
-First, the addresses of the patched functions are found according to their
-names. The special relocations, mentioned in the section "New functions",
-are applied. The relevant entries are created under
+First, possible conflicts are checked for non-cumulative patches (those
+with the replace flag disabled). The addresses of the patched functions are found
+according to their names. The special relocations, mentioned in the section
+"New functions", are applied. The relevant entries are created under
 /sys/kernel/livepatch/<name>. The patch is rejected when any above
 operation fails.
 
@@ -343,11 +344,11 @@ this process, see the "Consistency model" section.
 Finally, once all tasks have been patched, the 'transition' value changes
 to '0'.
 
-[*] Note that functions might be patched multiple times. The ftrace handler
-    is registered only once for a given function. Further patches just add
-    an entry to the list (see field `func_stack`) of the struct klp_ops.
-    The right implementation is selected by the ftrace handler, see
-    the "Consistency model" section.
+[*] Note that two patches might modify the same function during the transition
+    to a new cumulative patch. The ftrace handler is registered only once
+    for a given function. The new patch just adds an entry to the list
+    (see field `func_stack`) of the struct klp_ops. The right implementation
+    is selected by the ftrace handler, see the "Consistency model" section.
 
 
 5.2. Disabling
@@ -374,8 +375,11 @@ Third, the sysfs interface is destroyed.
 Finally, the module can be removed if the transition was not forced and the
 last sysfs entry has gone.
 
-Note that patches must be disabled in exactly the reverse order in which
-they were enabled. It makes the problem and the implementation much easier.
+Note that any patch dependencies have to be handled by the atomic replace
+and cumulative patches, see Documentation/livepatch/cumulative-patches.txt.
+Therefore there is usually only one patch enabled on the system. It is
+still possible to have several trivial and independent livepatches enabled
+at the same time. These can be enabled and disabled in any order.
 
 
 6. Sysfs
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 695d565f23c1..f3e199e8b767 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -137,6 +137,47 @@ static struct klp_object *klp_find_object(struct klp_patch *patch,
 	return NULL;
 }
 
+static int klp_check_obj_conflict(struct klp_patch *patch,
+				  struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	struct klp_func *func, *old_func;
+
+	obj = klp_find_object(patch, old_obj);
+	if (!obj)
+		return 0;
+
+	klp_for_each_func(old_obj, old_func) {
+		func = klp_find_func(obj, old_func);
+		if (!func)
+			continue;
+
+		pr_err("Function '%s,%lu' in object '%s' has already been livepatched.\n",
+		       func->old_name, func->old_sympos ? func->old_sympos : 1,
+		       obj->name ? obj->name : "vmlinux");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int klp_check_patch_conflict(struct klp_patch *patch)
+{
+	struct klp_patch *old_patch;
+	struct klp_object *old_obj;
+	int ret;
+
+	list_for_each_entry(old_patch, &klp_patches, list) {
+		klp_for_each_object(old_patch, old_obj) {
+			ret = klp_check_obj_conflict(patch, old_obj);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
 struct klp_find_arg {
 	const char *objname;
 	const char *name;
@@ -888,7 +929,6 @@ static int klp_init_patch(struct klp_patch *patch)
 	patch->module_put = false;
 	INIT_LIST_HEAD(&patch->list);
 	init_completion(&patch->finish);
-	klp_init_lists(patch);
 
 	if (!patch->objs)
 		return -EINVAL;
@@ -934,10 +974,6 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	if (klp_transition_patch)
 		return -EBUSY;
 
-	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches))
-		return -EBUSY;
-
 	klp_init_transition(patch, KLP_UNPATCHED);
 
 	klp_for_each_object(patch, obj)
@@ -1052,8 +1088,18 @@ int klp_enable_patch(struct klp_patch *patch)
 		return -ENOSYS;
 	}
 
+
 	mutex_lock(&klp_mutex);
 
+	/* Allow to use the dynamic lists in the check for conflicts. */
+	klp_init_lists(patch);
+
+	if (!patch->replace && klp_check_patch_conflict(patch)) {
+		pr_err("Use cumulative livepatches for dependent changes.\n");
+		mutex_unlock(&klp_mutex);
+		return -EINVAL;
+	}
+
 	ret = klp_init_patch(patch);
 	if (ret)
 		goto err;
-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v12 12/12] selftests/livepatch: introduce tests
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (10 preceding siblings ...)
  2018-08-28 14:36 ` [PATCH v12 11/12] livepatch: Remove ordering and refuse loading conflicting patches Petr Mladek
@ 2018-08-28 14:36 ` Petr Mladek
  2018-08-30 11:58 ` [PATCH v12 00/12] Miroslav Benes
  12 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-08-28 14:36 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
	live-patching, linux-kernel

From: Joe Lawrence <joe.lawrence@redhat.com>

Add a few livepatch modules and simple target modules that the included
regression suite can run tests against:

  - basic livepatching (multiple patches, atomic replace)
  - pre/post (un)patch callbacks
  - shadow variable API

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 Documentation/livepatch/callbacks.txt              | 489 +----------------
 MAINTAINERS                                        |   1 +
 lib/Kconfig.debug                                  |  21 +
 lib/Makefile                                       |   2 +
 lib/livepatch/Makefile                             |  15 +
 lib/livepatch/test_klp_atomic_replace.c            |  53 ++
 lib/livepatch/test_klp_callbacks_busy.c            |  43 ++
 lib/livepatch/test_klp_callbacks_demo.c            | 109 ++++
 lib/livepatch/test_klp_callbacks_demo2.c           |  89 ++++
 lib/livepatch/test_klp_callbacks_mod.c             |  24 +
 lib/livepatch/test_klp_livepatch.c                 |  47 ++
 lib/livepatch/test_klp_shadow_vars.c               | 236 +++++++++
 tools/testing/selftests/Makefile                   |   1 +
 tools/testing/selftests/livepatch/Makefile         |   8 +
 tools/testing/selftests/livepatch/README           |  43 ++
 tools/testing/selftests/livepatch/config           |   1 +
 tools/testing/selftests/livepatch/functions.sh     | 203 +++++++
 .../testing/selftests/livepatch/test-callbacks.sh  | 587 +++++++++++++++++++++
 .../testing/selftests/livepatch/test-livepatch.sh  | 168 ++++++
 .../selftests/livepatch/test-shadow-vars.sh        |  60 +++
 20 files changed, 1716 insertions(+), 484 deletions(-)
 create mode 100644 lib/livepatch/Makefile
 create mode 100644 lib/livepatch/test_klp_atomic_replace.c
 create mode 100644 lib/livepatch/test_klp_callbacks_busy.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo2.c
 create mode 100644 lib/livepatch/test_klp_callbacks_mod.c
 create mode 100644 lib/livepatch/test_klp_livepatch.c
 create mode 100644 lib/livepatch/test_klp_shadow_vars.c
 create mode 100644 tools/testing/selftests/livepatch/Makefile
 create mode 100644 tools/testing/selftests/livepatch/README
 create mode 100644 tools/testing/selftests/livepatch/config
 create mode 100644 tools/testing/selftests/livepatch/functions.sh
 create mode 100755 tools/testing/selftests/livepatch/test-callbacks.sh
 create mode 100755 tools/testing/selftests/livepatch/test-livepatch.sh
 create mode 100755 tools/testing/selftests/livepatch/test-shadow-vars.sh

diff --git a/Documentation/livepatch/callbacks.txt b/Documentation/livepatch/callbacks.txt
index c9776f48e458..182e31d4abce 100644
--- a/Documentation/livepatch/callbacks.txt
+++ b/Documentation/livepatch/callbacks.txt
@@ -118,488 +118,9 @@ similar change to their hw_features value.  (Client functions of the
 value may need to be updated accordingly.)
 
 
-Test cases
-==========
-
-What follows is not an exhaustive test suite of every possible livepatch
-pre/post-(un)patch combination, but a selection that demonstrates a few
-important concepts.  Each test case uses the kernel modules located in
-the samples/livepatch/ and assumes that no livepatches are loaded at the
-beginning of the test.
-
-
-Test 1
-------
-
-Test a combination of loading a kernel module and a livepatch that
-patches a function in the first module.  (Un)load the target module
-before the livepatch module:
-
-- load target module
-- load livepatch
-- disable livepatch
-- unload target module
-- unload livepatch
-
-First load a target module:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   34.475708] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-On livepatch enable, before the livepatch transition starts, pre-patch
-callbacks are executed for vmlinux and livepatch_callbacks_mod (those
-klp_objects currently loaded).  After klp_objects are patched according
-to the klp_patch, their post-patch callbacks run and the transition
-completes:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   36.503719] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   36.504213] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   36.504238] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   36.504721] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   36.505849] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   37.727133] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   37.727232] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   37.727860] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   37.728792] livepatch: 'livepatch_callbacks_demo': patching complete
-
-Similarly, on livepatch disable, pre-patch callbacks run before the
-unpatching transition starts.  klp_objects are reverted, post-patch
-callbacks execute and the transition completes:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   38.510209] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   38.510234] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   38.510982] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   38.512209] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   39.711132] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   39.711210] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   39.711779] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   39.712735] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   42.534183] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-
-
-Test 2
-------
-
-This test is similar to the previous test, but (un)load the livepatch
-module before the target kernel module.  This tests the livepatch core's
-module_coming handler:
-
-- load livepatch
-- load target module
-- disable livepatch
-- unload livepatch
-- unload target module
-
-
-On livepatch enable, only pre/post-patch callbacks are executed for
-currently loaded klp_objects, in this case, vmlinux:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   44.553328] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   44.553997] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   44.554049] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   44.554845] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   45.727128] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   45.727212] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   45.727961] livepatch: 'livepatch_callbacks_demo': patching complete
-
-When a targeted module is subsequently loaded, only its pre/post-patch
-callbacks are executed:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   46.560845] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [   46.561988] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   46.563452] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   46.565495] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-On livepatch disable, all currently loaded klp_objects' (vmlinux and
-livepatch_callbacks_mod) pre/post-unpatch callbacks are executed:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   48.568885] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   48.568910] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   48.569441] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   48.570502] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   49.759091] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   49.759171] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   49.759742] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   49.760690] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   52.592283] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-
-
-Test 3
-------
-
-Test loading the livepatch after a targeted kernel module, then unload
-the kernel module before disabling the livepatch.  This tests the
-livepatch core's module_going handler:
-
-- load target module
-- load livepatch
-- unload target module
-- disable livepatch
-- unload livepatch
-
-First load a target module, then the livepatch:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   54.607948] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   56.613919] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   56.614411] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   56.614436] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   56.614818] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   56.615656] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   57.759070] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   57.759147] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   57.759621] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   57.760307] livepatch: 'livepatch_callbacks_demo': patching complete
-
-When a target module is unloaded, the livepatch is only reverted from
-that klp_object (livepatch_callbacks_mod).  As such, only its pre and
-post-unpatch callbacks are executed when this occurs:
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   58.623409] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [   58.623903] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-  [   58.624658] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [   58.625305] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-When the livepatch is disabled, pre and post-unpatch callbacks are run
-for the remaining klp_object, vmlinux:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   60.638420] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   60.638444] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   60.638996] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   61.727088] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   61.727165] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   61.727985] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 4
-------
-
-This test is similar to the previous test, however the livepatch is
-loaded first.  This tests the livepatch core's module_coming and
-module_going handlers:
-
-- load livepatch
-- load target module
-- unload target module
-- disable livepatch
-- unload livepatch
-
-First load the livepatch:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   64.661552] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   64.662147] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   64.662175] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   64.662850] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   65.695056] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   65.695147] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   65.695561] livepatch: 'livepatch_callbacks_demo': patching complete
-
-When a targeted kernel module is subsequently loaded, only its
-pre/post-patch callbacks are executed:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   66.669196] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [   66.669882] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   66.670744] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   66.672873] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-When the target module is unloaded, the livepatch is only reverted from
-the livepatch_callbacks_mod klp_object.  As such, only pre and
-post-unpatch callbacks are executed when this occurs:
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   68.680065] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [   68.680688] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-  [   68.681452] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [   68.682094] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   70.689225] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   70.689256] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   70.689882] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   71.711080] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   71.711481] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   71.711988] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 5
-------
-
-A simple test of loading a livepatch without one of its patch target
-klp_objects ever loaded (livepatch_callbacks_mod):
-
-- load livepatch
-- disable livepatch
-- unload livepatch
-
-Load the livepatch:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   74.711081] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   74.711595] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   74.711639] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   74.712272] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   75.743137] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   75.743219] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   75.743867] livepatch: 'livepatch_callbacks_demo': patching complete
-
-As expected, only pre/post-(un)patch handlers are executed for vmlinux:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   76.716254] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   76.716278] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   76.716666] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   77.727089] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   77.727194] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   77.727907] livepatch: 'livepatch_callbacks_demo': unpatching complete
+Other Examples
+==============
 
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 6
-------
-
-Test a scenario where a vmlinux pre-patch callback returns a non-zero
-status (ie, failure):
-
-- load target module
-- load livepatch -ENODEV
-- unload target module
-
-First load a target module:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   80.740520] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-Load the livepatch module, setting its 'pre_patch_ret' value to -19
-(-ENODEV).  When its vmlinux pre-patch callback executed, this status
-code will propagate back to the module-loading subsystem.  The result is
-that the insmod command refuses to load the livepatch module:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko pre_patch_ret=-19
-  [   82.747326] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   82.747743] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   82.747767] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   82.748237] livepatch: pre-patch callback failed for object 'vmlinux'
-  [   82.748637] livepatch: failed to enable patch 'livepatch_callbacks_demo'
-  [   82.749059] livepatch: 'livepatch_callbacks_demo': canceling transition, going to unpatch
-  [   82.749060] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   82.749868] livepatch: 'livepatch_callbacks_demo': unpatching complete
-  [   82.765809] insmod: ERROR: could not insert module samples/livepatch/livepatch-callbacks-demo.ko: No such device
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   84.774238] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-
-
-Test 7
-------
-
-Similar to the previous test, setup a livepatch such that its vmlinux
-pre-patch callback returns success.  However, when a targeted kernel
-module is later loaded, have the livepatch return a failing status code:
-
-- load livepatch
-- setup -ENODEV
-- load target module
-- disable livepatch
-- unload livepatch
-
-Load the livepatch, notice vmlinux pre-patch callback succeeds:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   86.787845] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   86.788325] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   86.788427] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   86.788821] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   87.711069] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   87.711143] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   87.711886] livepatch: 'livepatch_callbacks_demo': patching complete
-
-Set a trap so subsequent pre-patch callbacks to this livepatch will
-return -ENODEV:
-
-  % echo -19 > /sys/module/livepatch_callbacks_demo/parameters/pre_patch_ret
-
-The livepatch pre-patch callback for subsequently loaded target modules
-will return failure, so the module loader refuses to load the kernel
-module.  Notice that no post-patch or pre/post-unpatch callbacks are
-executed for this klp_object:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   90.796976] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [   90.797834] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   90.798900] livepatch: pre-patch callback failed for object 'livepatch_callbacks_mod'
-  [   90.799652] livepatch: patch 'livepatch_callbacks_demo' failed for module 'livepatch_callbacks_mod', refusing to load module 'livepatch_callbacks_mod'
-  [   90.819737] insmod: ERROR: could not insert module samples/livepatch/livepatch-callbacks-mod.ko: No such device
-
-However, pre/post-unpatch callbacks run for the vmlinux klp_object:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   92.823547] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   92.823573] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   92.824331] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   93.727128] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   93.727327] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   93.727861] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 8
-------
-
-Test loading multiple targeted kernel modules.  This test-case is
-mainly for comparing with the next test-case.
-
-- load busy target module (0s sleep),
-- load livepatch
-- load target module
-- unload target module
-- disable livepatch
-- unload livepatch
-- unload busy target module
-
-
-Load a target "busy" kernel module which kicks off a worker function
-that immediately exits:
-
-  % insmod samples/livepatch/livepatch-callbacks-busymod.ko sleep_secs=0
-  [   96.910107] livepatch_callbacks_busymod: livepatch_callbacks_mod_init
-  [   96.910600] livepatch_callbacks_busymod: busymod_work_func, sleeping 0 seconds ...
-  [   96.913024] livepatch_callbacks_busymod: busymod_work_func exit
-
-Proceed with loading the livepatch and another ordinary target module,
-notice that the post-patch callbacks are executed and the transition
-completes quickly:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   98.917892] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   98.918426] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   98.918453] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   98.918955] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [   98.923835] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   99.743104] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   99.743156] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   99.743679] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [   99.744616] livepatch: 'livepatch_callbacks_demo': patching complete
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  100.930955] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [  100.931668] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [  100.932645] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [  100.934125] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  102.942805] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [  102.943640] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-  [  102.944585] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [  102.945455] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [  104.953815] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [  104.953838] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [  104.954431] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  104.955426] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [  106.719073] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [  106.722633] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [  106.723282] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  106.724279] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-busymod.ko
-  [  108.975660] livepatch_callbacks_busymod: livepatch_callbacks_mod_exit
-
-
-Test 9
-------
-
-A similar test as the previous one, but force the "busy" kernel module
-to do longer work.
-
-The livepatching core will refuse to patch a task that is currently
-executing a to-be-patched function -- the consistency model stalls the
-current patch transition until this safety-check is met.  Test a
-scenario where one of a livepatch's target klp_objects sits on such a
-function for a long time.  Meanwhile, load and unload other target
-kernel modules while the livepatch transition is in progress.
-
-- load busy target module (30s sleep)
-- load livepatch
-- load target module
-- unload target module
-- disable livepatch
-- unload livepatch
-- unload busy target module
-
-
-Load the "busy" kernel module, this time make it do 30 seconds worth of
-work:
-
-  % insmod samples/livepatch/livepatch-callbacks-busymod.ko sleep_secs=30
-  [  110.993362] livepatch_callbacks_busymod: livepatch_callbacks_mod_init
-  [  110.994059] livepatch_callbacks_busymod: busymod_work_func, sleeping 30 seconds ...
-
-Meanwhile, the livepatch is loaded.  Notice that the patch transition
-does not complete as the targeted "busy" module is sitting on a
-to-be-patched function:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [  113.000309] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [  113.000764] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [  113.000791] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [  113.001289] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  113.005208] livepatch: 'livepatch_callbacks_demo': starting patching transition
-
-Load a second target module (this one is an ordinary idle kernel
-module).  Note that *no* post-patch callbacks will be executed while the
-livepatch is still in transition:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  115.012740] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [  115.013406] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [  115.015315] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-Request an unload of the simple kernel module.  The patch is still
-transitioning, so its pre-unpatch callbacks are skipped:
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  117.022626] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [  117.023376] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [  117.024533] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-Finally the livepatch is disabled.  Since none of the patch's
-klp_object's post-patch callbacks executed, the remaining klp_object's
-pre-unpatch callbacks are skipped:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [  119.035408] livepatch: 'livepatch_callbacks_demo': reversing transition from patching to unpatching
-  [  119.035485] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [  119.711166] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [  119.714179] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [  119.714653] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  119.715437] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-busymod.ko
-  [  141.279111] livepatch_callbacks_busymod: busymod_work_func exit
-  [  141.279760] livepatch_callbacks_busymod: livepatch_callbacks_mod_exit
+Sample livepatch modules demonstrating the callback API can be found in
+the samples/livepatch/ directory.  These samples have been adapted for
+use in kselftests and can be found in the lib/livepatch/ directory.
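The converted kselftests exercise the scenarios above by comparing the observed kernel log against an expected transcript. A minimal sketch of such a comparison helper is shown below; the function name `check_result` and its shape are illustrative assumptions, not the actual API of the scripts under tools/testing/selftests/livepatch/:

```shell
#!/bin/sh
# Hypothetical sketch of the expected-vs-observed log check used by the
# livepatch kselftests.  check_result() is an assumed name for
# illustration; the real selftest scripts may differ.
check_result() {
	expected="$1"
	observed="$2"
	if [ "$expected" = "$observed" ]; then
		echo "ok"
	else
		echo "not ok"
		printf 'expected:\n%s\nobserved:\n%s\n' "$expected" "$observed"
		return 1
	fi
}

# A matching transcript passes:
check_result "livepatch: 'demo': patching complete" \
	     "livepatch: 'demo': patching complete"
```

In the real selftests the "observed" side would be the dmesg delta captured while a test module is loaded and unloaded.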
diff --git a/MAINTAINERS b/MAINTAINERS
index a5b256b25905..87b370a97fca 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8506,6 +8506,7 @@ F:	arch/x86/kernel/livepatch.c
 F:	Documentation/livepatch/
 F:	Documentation/ABI/testing/sysfs-kernel-livepatch
 F:	samples/livepatch/
+F:	tools/testing/selftests/livepatch/
 L:	live-patching@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching.git
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 613316724c6a..473ba2ba1ed8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1965,6 +1965,27 @@ config TEST_DEBUG_VIRTUAL
 
 	  If unsure, say N.
 
+config TEST_LIVEPATCH
+	tristate "Test livepatching"
+	default n
+	depends on LIVEPATCH
+	depends on m
+	help
+	  Test kernel livepatching features for correctness.  The tests will
+	  load test modules that will be livepatched in various scenarios.
+
+	  To run all the livepatching tests:
+
+	  make -C tools/testing/selftests TARGETS=livepatch run_tests
+
+	  Alternatively, individual tests may be invoked:
+
+	  tools/testing/selftests/livepatch/test-callbacks.sh
+	  tools/testing/selftests/livepatch/test-livepatch.sh
+	  tools/testing/selftests/livepatch/test-shadow-vars.sh
+
+	  If unsure, say N.
+
 endif # RUNTIME_TESTING_MENU
 
 config MEMTEST
diff --git a/lib/Makefile b/lib/Makefile
index ca3f7ebb900d..ac7e8f9a4819 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -72,6 +72,8 @@ obj-$(CONFIG_TEST_PARMAN) += test_parman.o
 obj-$(CONFIG_TEST_KMOD) += test_kmod.o
 obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o
 
+obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/
+
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
 CFLAGS_kobject_uevent.o += -DDEBUG
diff --git a/lib/livepatch/Makefile b/lib/livepatch/Makefile
new file mode 100644
index 000000000000..26900ddaef82
--- /dev/null
+++ b/lib/livepatch/Makefile
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for livepatch test code.
+
+obj-$(CONFIG_TEST_LIVEPATCH) += test_klp_atomic_replace.o \
+				test_klp_callbacks_demo.o \
+				test_klp_callbacks_demo2.o \
+				test_klp_callbacks_busy.o \
+				test_klp_callbacks_mod.o \
+				test_klp_livepatch.o \
+				test_klp_shadow_vars.o
+
+# Target modules to be livepatched require CC_FLAGS_FTRACE
+CFLAGS_test_klp_callbacks_busy.o	+= $(CC_FLAGS_FTRACE)
+CFLAGS_test_klp_callbacks_mod.o		+= $(CC_FLAGS_FTRACE)
diff --git a/lib/livepatch/test_klp_atomic_replace.c b/lib/livepatch/test_klp_atomic_replace.c
new file mode 100644
index 000000000000..d741405c42a9
--- /dev/null
+++ b/lib/livepatch/test_klp_atomic_replace.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+static int replace;
+module_param(replace, int, 0644);
+MODULE_PARM_DESC(replace, "replace (default=0)");
+
+#include <linux/seq_file.h>
+static int livepatch_meminfo_proc_show(struct seq_file *m, void *v)
+{
+	seq_printf(m, "%s: %s\n", THIS_MODULE->name,
+		   "this has been live patched");
+	return 0;
+}
+
+static struct klp_func funcs[] = {
+	KLP_FUNC(meminfo_proc_show, livepatch_meminfo_proc_show),
+	KLP_FUNC_END
+};
+
+static struct klp_object objs[] = {
+	KLP_VMLINUX(funcs),
+	KLP_OBJECT_END
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+	/* set .replace in the init function below for demo purposes */
+};
+
+static int test_klp_atomic_replace_init(void)
+{
+	patch.replace = replace;
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_atomic_replace_exit(void)
+{
+}
+
+module_init(test_klp_atomic_replace_init);
+module_exit(test_klp_atomic_replace_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: atomic replace");
diff --git a/lib/livepatch/test_klp_callbacks_busy.c b/lib/livepatch/test_klp_callbacks_busy.c
new file mode 100644
index 000000000000..40beddf8a0e2
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_busy.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+
+static int sleep_secs;
+module_param(sleep_secs, int, 0644);
+MODULE_PARM_DESC(sleep_secs, "sleep_secs (default=0)");
+
+static void busymod_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(work, busymod_work_func);
+
+static void busymod_work_func(struct work_struct *work)
+{
+	pr_info("%s, sleeping %d seconds ...\n", __func__, sleep_secs);
+	msleep(sleep_secs * 1000);
+	pr_info("%s exit\n", __func__);
+}
+
+static int test_klp_callbacks_busy_init(void)
+{
+	pr_info("%s\n", __func__);
+	schedule_delayed_work(&work, 0);
+	return 0;
+}
+
+static void test_klp_callbacks_busy_exit(void)
+{
+	cancel_delayed_work_sync(&work);
+	pr_info("%s\n", __func__);
+}
+
+module_init(test_klp_callbacks_busy_init);
+module_exit(test_klp_callbacks_busy_exit);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: busy target module");
diff --git a/lib/livepatch/test_klp_callbacks_demo.c b/lib/livepatch/test_klp_callbacks_demo.c
new file mode 100644
index 000000000000..9e9f3e219e32
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_demo.c
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+static int pre_patch_ret;
+module_param(pre_patch_ret, int, 0644);
+MODULE_PARM_DESC(pre_patch_ret, "pre_patch_ret (default=0)");
+
+static const char *const module_state[] = {
+	[MODULE_STATE_LIVE]	= "[MODULE_STATE_LIVE] Normal state",
+	[MODULE_STATE_COMING]	= "[MODULE_STATE_COMING] Full formed, running module_init",
+	[MODULE_STATE_GOING]	= "[MODULE_STATE_GOING] Going away",
+	[MODULE_STATE_UNFORMED]	= "[MODULE_STATE_UNFORMED] Still setting it up",
+};
+
+static void callback_info(const char *callback, struct klp_object *obj)
+{
+	if (obj->mod)
+		pr_info("%s: %s -> %s\n", callback, obj->mod->name,
+			module_state[obj->mod->state]);
+	else
+		pr_info("%s: vmlinux\n", callback);
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static int pre_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+	return pre_patch_ret;
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static void post_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void pre_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void post_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+static void patched_work_func(struct work_struct *work)
+{
+	pr_info("%s\n", __func__);
+}
+
+static struct klp_func no_funcs[] = {
+	KLP_FUNC_END
+};
+
+static struct klp_func busymod_funcs[] = {
+	KLP_FUNC(busymod_work_func, patched_work_func),
+	KLP_FUNC_END
+};
+
+static struct klp_object objs[] = {
+	KLP_VMLINUX_CALLBACKS(no_funcs,
+			      pre_patch_callback,
+			      post_patch_callback,
+			      pre_unpatch_callback,
+			      post_unpatch_callback),
+	KLP_OBJECT_CALLBACKS(test_klp_callbacks_mod,
+			     no_funcs,
+			     pre_patch_callback,
+			     post_patch_callback,
+			     pre_unpatch_callback,
+			     post_unpatch_callback),
+	KLP_OBJECT_CALLBACKS(test_klp_callbacks_busy,
+			     busymod_funcs,
+			     pre_patch_callback,
+			     post_patch_callback,
+			     pre_unpatch_callback,
+			     post_unpatch_callback),
+	KLP_OBJECT_END
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+};
+
+static int test_klp_callbacks_demo_init(void)
+{
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_callbacks_demo_exit(void)
+{
+}
+
+module_init(test_klp_callbacks_demo_init);
+module_exit(test_klp_callbacks_demo_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: livepatch demo");
diff --git a/lib/livepatch/test_klp_callbacks_demo2.c b/lib/livepatch/test_klp_callbacks_demo2.c
new file mode 100644
index 000000000000..4cca72912d42
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_demo2.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+static int replace;
+module_param(replace, int, 0644);
+MODULE_PARM_DESC(replace, "replace (default=0)");
+
+static const char *const module_state[] = {
+	[MODULE_STATE_LIVE]	= "[MODULE_STATE_LIVE] Normal state",
+	[MODULE_STATE_COMING]	= "[MODULE_STATE_COMING] Full formed, running module_init",
+	[MODULE_STATE_GOING]	= "[MODULE_STATE_GOING] Going away",
+	[MODULE_STATE_UNFORMED]	= "[MODULE_STATE_UNFORMED] Still setting it up",
+};
+
+static void callback_info(const char *callback, struct klp_object *obj)
+{
+	if (obj->mod)
+		pr_info("%s: %s -> %s\n", callback, obj->mod->name,
+			module_state[obj->mod->state]);
+	else
+		pr_info("%s: vmlinux\n", callback);
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static int pre_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+	return 0;
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static void post_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void pre_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void post_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+static struct klp_func no_funcs[] = {
+	KLP_FUNC_END
+};
+
+static struct klp_object objs[] = {
+	KLP_VMLINUX_CALLBACKS(no_funcs,
+			      pre_patch_callback,
+			      post_patch_callback,
+			      pre_unpatch_callback,
+			      post_unpatch_callback),
+	KLP_OBJECT_END
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+	/* set .replace in the init function below for demo purposes */
+};
+
+static int test_klp_callbacks_demo2_init(void)
+{
+	patch.replace = replace;
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_callbacks_demo2_exit(void)
+{
+}
+
+module_init(test_klp_callbacks_demo2_init);
+module_exit(test_klp_callbacks_demo2_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: livepatch demo2");
diff --git a/lib/livepatch/test_klp_callbacks_mod.c b/lib/livepatch/test_klp_callbacks_mod.c
new file mode 100644
index 000000000000..8fbe645b1c2c
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_mod.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+static int test_klp_callbacks_mod_init(void)
+{
+	pr_info("%s\n", __func__);
+	return 0;
+}
+
+static void test_klp_callbacks_mod_exit(void)
+{
+	pr_info("%s\n", __func__);
+}
+
+module_init(test_klp_callbacks_mod_init);
+module_exit(test_klp_callbacks_mod_exit);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: target module");
diff --git a/lib/livepatch/test_klp_livepatch.c b/lib/livepatch/test_klp_livepatch.c
new file mode 100644
index 000000000000..480d762fab97
--- /dev/null
+++ b/lib/livepatch/test_klp_livepatch.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2014 Seth Jennings <sjenning@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+#include <linux/seq_file.h>
+static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
+{
+	seq_printf(m, "%s: %s\n", THIS_MODULE->name,
+		   "this has been live patched");
+	return 0;
+}
+
+static struct klp_func funcs[] = {
+	KLP_FUNC(cmdline_proc_show, livepatch_cmdline_proc_show),
+	KLP_FUNC_END
+};
+
+static struct klp_object objs[] = {
+	KLP_VMLINUX(funcs),
+	KLP_OBJECT_END
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+};
+
+static int test_klp_livepatch_init(void)
+{
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_livepatch_exit(void)
+{
+}
+
+module_init(test_klp_livepatch_init);
+module_exit(test_klp_livepatch_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Seth Jennings <sjenning@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: livepatch module");
diff --git a/lib/livepatch/test_klp_shadow_vars.c b/lib/livepatch/test_klp_shadow_vars.c
new file mode 100644
index 000000000000..02f892f941dc
--- /dev/null
+++ b/lib/livepatch/test_klp_shadow_vars.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/livepatch.h>
+#include <linux/slab.h>
+
+/*
+ * Keep a small list of pointers so that we can print address-agnostic
+ * pointer values.  Use a rolling integer count to differentiate the values.
+ * Ironically we could have used the shadow variable API to do this, but
+ * let's not lean too heavily on the very code we're testing.
+ */
+static LIST_HEAD(ptr_list);
+struct shadow_ptr {
+	void *ptr;
+	int id;
+	struct list_head list;
+};
+
+static void free_ptr_list(void)
+{
+	struct shadow_ptr *sp, *tmp_sp;
+
+	list_for_each_entry_safe(sp, tmp_sp, &ptr_list, list) {
+		list_del(&sp->list);
+		kfree(sp);
+	}
+}
+
+static int ptr_id(void *ptr)
+{
+	struct shadow_ptr *sp;
+	static int count;
+
+	list_for_each_entry(sp, &ptr_list, list) {
+		if (sp->ptr == ptr)
+			return sp->id;
+	}
+
+	sp = kmalloc(sizeof(*sp), GFP_ATOMIC);
+	if (!sp)
+		return -1;
+	sp->ptr = ptr;
+	sp->id = count++;
+
+	list_add(&sp->list, &ptr_list);
+
+	return sp->id;
+}
+
+/*
+ * Shadow variable wrapper functions that echo the function and arguments
+ * to the kernel log for testing verification.  Don't display raw pointers,
+ * but use the ptr_id() value instead.
+ */
+static void *shadow_get(void *obj, unsigned long id)
+{
+	void *ret = klp_shadow_get(obj, id);
+
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx) = PTR%d\n",
+		__func__, ptr_id(obj), id, ptr_id(ret));
+
+	return ret;
+}
+
+static void *shadow_alloc(void *obj, unsigned long id, size_t size,
+			  gfp_t gfp_flags, klp_shadow_ctor_t ctor,
+			  void *ctor_data)
+{
+	void *ret = klp_shadow_alloc(obj, id, size, gfp_flags, ctor,
+				     ctor_data);
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx, size=%zx, gfp_flags=%pGg), ctor=PTR%d, ctor_data=PTR%d = PTR%d\n",
+		__func__, ptr_id(obj), id, size, &gfp_flags, ptr_id(ctor),
+		ptr_id(ctor_data), ptr_id(ret));
+	return ret;
+}
+
+static void *shadow_get_or_alloc(void *obj, unsigned long id, size_t size,
+				 gfp_t gfp_flags, klp_shadow_ctor_t ctor,
+				 void *ctor_data)
+{
+	void *ret = klp_shadow_get_or_alloc(obj, id, size, gfp_flags, ctor,
+					    ctor_data);
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx, size=%zx, gfp_flags=%pGg), ctor=PTR%d, ctor_data=PTR%d = PTR%d\n",
+		__func__, ptr_id(obj), id, size, &gfp_flags, ptr_id(ctor),
+		ptr_id(ctor_data), ptr_id(ret));
+	return ret;
+}
+
+static void shadow_free(void *obj, unsigned long id, klp_shadow_dtor_t dtor)
+{
+	klp_shadow_free(obj, id, dtor);
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx, dtor=PTR%d)\n",
+		__func__, ptr_id(obj), id, ptr_id(dtor));
+}
+
+static void shadow_free_all(unsigned long id, klp_shadow_dtor_t dtor)
+{
+	klp_shadow_free_all(id, dtor);
+	pr_info("klp_%s(id=0x%lx, dtor=PTR%d)\n",
+		__func__, id, ptr_id(dtor));
+}
+
+
+/* Shadow variable constructor - remember simple pointer data */
+static int shadow_ctor(void *obj, void *shadow_data, void *ctor_data)
+{
+	int **shadow_int = shadow_data;
+	*shadow_int = ctor_data;
+	pr_info("%s: PTR%d -> PTR%d\n",
+		__func__, ptr_id(shadow_int), ptr_id(ctor_data));
+
+	return 0;
+}
+
+static void shadow_dtor(void *obj, void *shadow_data)
+{
+	pr_info("%s(obj=PTR%d, shadow_data=PTR%d)\n",
+		__func__, ptr_id(obj), ptr_id(shadow_data));
+}
+
+static int test_klp_shadow_vars_init(void)
+{
+	void *obj			= THIS_MODULE;
+	int id			= 0x1234;
+	size_t size		= sizeof(int *);
+	gfp_t gfp_flags		= GFP_KERNEL;
+
+	int var1, var2, var3, var4;
+	int **sv1, **sv2, **sv3, **sv4;
+
+	void *ret;
+
+	ptr_id(NULL);
+	ptr_id(&var1);
+	ptr_id(&var2);
+	ptr_id(&var3);
+	ptr_id(&var4);
+
+	/*
+	 * With an empty shadow variable hash table, expect not to find
+	 * any matches.
+	 */
+	ret = shadow_get(obj, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	/*
+	 * Allocate a few shadow variables with different <obj> and <id>.
+	 */
+	sv1 = shadow_alloc(obj, id, size, gfp_flags, shadow_ctor, &var1);
+	sv2 = shadow_alloc(obj + 1, id, size, gfp_flags, shadow_ctor, &var2);
+	sv3 = shadow_alloc(obj, id + 1, size, gfp_flags, shadow_ctor, &var3);
+
+	/*
+	 * Verify we can find our new shadow variables and that they point
+	 * to expected data.
+	 */
+	ret = shadow_get(obj, id);
+	if (ret == sv1 && *sv1 == &var1)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv1), ptr_id(*sv1));
+	ret = shadow_get(obj + 1, id);
+	if (ret == sv2 && *sv2 == &var2)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv2), ptr_id(*sv2));
+	ret = shadow_get(obj, id + 1);
+	if (ret == sv3 && *sv3 == &var3)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv3), ptr_id(*sv3));
+
+	/*
+	 * Allocate or get a few more, this time with the same <obj>, <id>.
+	 * The second invocation should return the same shadow var.
+	 */
+	sv4 = shadow_get_or_alloc(obj + 2, id, size, gfp_flags, shadow_ctor, &var4);
+	ret = shadow_get_or_alloc(obj + 2, id, size, gfp_flags, shadow_ctor, &var4);
+	if (ret == sv4 && *sv4 == &var4)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv4), ptr_id(*sv4));
+
+	/*
+	 * Free the <obj=*, id> shadow variables and check that we can no
+	 * longer find them.
+	 */
+	shadow_free(obj, id, shadow_dtor);			/* sv1 */
+	ret = shadow_get(obj, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	shadow_free(obj + 1, id, shadow_dtor);			/* sv2 */
+	ret = shadow_get(obj + 1, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	shadow_free(obj + 2, id, shadow_dtor);			/* sv4 */
+	ret = shadow_get(obj + 2, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	/*
+	 * We should still find an <id+1> variable.
+	 */
+	ret = shadow_get(obj, id + 1);
+	if (ret == sv3 && *sv3 == &var3)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv3), ptr_id(*sv3));
+
+	/*
+	 * Free all the <id+1> variables, too.
+	 */
+	shadow_free_all(id + 1, shadow_dtor);			/* sv3 */
+	ret = shadow_get(obj, id);
+	if (!ret)
+		pr_info("  shadow_get() got expected NULL result\n");
+
+
+	free_ptr_list();
+
+	return 0;
+}
+
+static void test_klp_shadow_vars_exit(void)
+{
+}
+
+module_init(test_klp_shadow_vars_init);
+module_exit(test_klp_shadow_vars_exit);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: shadow variables");
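As the test module above exercises, the shadow variable API keys out-of-band data by an <obj, id> pair: same id across many objects, freeable per-pair or in bulk per-id. A toy sketch of that lookup model using a bash associative array; the helper names are illustrative stand-ins, not the kernel's klp_shadow_* API:

```shell
#!/bin/bash
# Toy model of the <obj, id> -> data mapping behind shadow variables.
# Purely illustrative; the real implementation (klp_shadow_get/alloc/
# free in kernel/livepatch/shadow.c) uses a hashtable under a spinlock.
declare -A shadow

shadow_attach() {	# obj id data
	shadow["$1,$2"]="$3"
}

shadow_get() {		# obj id
	echo "${shadow[$1,$2]}"
}

shadow_free_all() {	# id - detach every <obj, id> pair for this id
	local key
	for key in "${!shadow[@]}"; do
		[[ $key == *",$1" ]] && unset "shadow[$key]"
	done
}

shadow_attach objA 0x1234 "varA"
shadow_attach objB 0x1234 "varB"
shadow_attach objA 0x1235 "varC"
shadow_get objA 0x1234		# prints varA
shadow_free_all 0x1234
shadow_get objA 0x1235		# prints varC: the <id+1> entry survives
```

This mirrors the test sequence above: freeing all variables for one id leaves variables registered under other ids untouched.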
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index f1fe492c8e17..f2f96cc02ef5 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -18,6 +18,7 @@ TARGETS += ipc
 TARGETS += kcmp
 TARGETS += kvm
 TARGETS += lib
+TARGETS += livepatch
 TARGETS += membarrier
 TARGETS += memfd
 TARGETS += memory-hotplug
diff --git a/tools/testing/selftests/livepatch/Makefile b/tools/testing/selftests/livepatch/Makefile
new file mode 100644
index 000000000000..af4aee79bebb
--- /dev/null
+++ b/tools/testing/selftests/livepatch/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+
+TEST_PROGS := \
+	test-livepatch.sh \
+	test-callbacks.sh \
+	test-shadow-vars.sh
+
+include ../lib.mk
diff --git a/tools/testing/selftests/livepatch/README b/tools/testing/selftests/livepatch/README
new file mode 100644
index 000000000000..b73cd0e2dd51
--- /dev/null
+++ b/tools/testing/selftests/livepatch/README
@@ -0,0 +1,43 @@
+====================
+Livepatch Self Tests
+====================
+
+This is a small set of sanity tests for the kernel's livepatching subsystem.
+
+The test suite loads and unloads several test kernel modules to verify
+livepatch behavior.  Debug information is logged to the kernel's message
+buffer and parsed for expected messages.  (Note: the tests will clear
+the message buffer between individual tests.)
+
+
+Config
+------
+
+Set these config options and their prerequisites:
+
+CONFIG_LIVEPATCH=y
+CONFIG_TEST_LIVEPATCH=m
+
+
+Running the tests
+-----------------
+
+Test kernel modules are built as part of lib/ (make modules) and need to
+be installed (make modules_install) as the test scripts will modprobe
+them.
+
+To run the livepatch selftests, from the top of the kernel source tree:
+
+  % make -C tools/testing/selftests TARGETS=livepatch run_tests
+
+
+Adding tests
+------------
+
+See the common functions.sh file for the existing collection of utility
+functions, most importantly set_dynamic_debug() and check_result().  The
+latter function greps the kernel's ring buffer for "livepatch:" and
+"test_klp" strings, so tests should include one of those strings for
+result comparison.  Other utility functions include general module
+loading and livepatch loading helpers (waiting for patch transitions,
+sysfs entries, etc.)
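The filter-and-compare technique that check_result() uses (grep the ring buffer for "livepatch:"/"test_klp" lines, strip the timestamp prefix, diff against an expected transcript) can be tried outside the kernel. A minimal sketch; the sample log lines are invented for illustration:

```shell
#!/bin/bash
# Sketch of the expected-vs-actual comparison behind check_result():
# keep only relevant log lines, strip "[ timestamp ]" prefixes, then
# compare against the expected transcript.  Sample lines are invented.

expected="livepatch: enabling patch 'demo'
test_klp_demo: pre_patch_callback: vmlinux"

log="[   12.345678] unrelated: driver noise
[   12.345679] livepatch: enabling patch 'demo'
[   12.345680] test_klp_demo: pre_patch_callback: vmlinux"

result=$(echo "$log" | grep -e 'livepatch:' -e 'test_klp' | \
	 sed 's/^\[[ 0-9.]*\] //')

if [[ "$expected" == "$result" ]]; then
	echo "ok"
else
	diff -u <(echo "$expected") <(echo "$result")
fi
```

Because the comparison is an exact string match, any reordered or extra matching line shows up in the diff output, which is why the tests clear the message buffer (dmesg -C) before each scenario.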
diff --git a/tools/testing/selftests/livepatch/config b/tools/testing/selftests/livepatch/config
new file mode 100644
index 000000000000..0dd7700464a8
--- /dev/null
+++ b/tools/testing/selftests/livepatch/config
@@ -0,0 +1 @@
+CONFIG_TEST_LIVEPATCH=m
diff --git a/tools/testing/selftests/livepatch/functions.sh b/tools/testing/selftests/livepatch/functions.sh
new file mode 100644
index 000000000000..d448b115f06c
--- /dev/null
+++ b/tools/testing/selftests/livepatch/functions.sh
@@ -0,0 +1,203 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+# Shell functions for the rest of the scripts.
+
+MAX_RETRIES=600
+RETRY_INTERVAL=".1"	# seconds
+
+# log(msg) - write message to kernel log
+#	msg - insightful words
+function log() {
+	echo "$1" > /dev/kmsg
+}
+
+# die(msg) - game over, man
+#	msg - dying words
+function die() {
+	log "ERROR: $1"
+	echo "ERROR: $1" >&2
+	exit 1
+}
+
+# set_dynamic_debug() - setup kernel dynamic debug
+#	TODO - push and pop this config?
+function set_dynamic_debug() {
+	cat << EOF > /sys/kernel/debug/dynamic_debug/control
+file kernel/livepatch/* +p
+func klp_try_switch_task -p
+EOF
+}
+
+# loop_until(cmd) - loop a command until it is successful or $MAX_RETRIES,
+#		    sleep $RETRY_INTERVAL between attempts
+#	cmd - command and its arguments to run
+function loop_until() {
+	local cmd="$*"
+	local i=0
+	while true; do
+		eval "$cmd" && return 0
+		[[ $((i++)) -eq $MAX_RETRIES ]] && return 1
+		sleep $RETRY_INTERVAL
+	done
+}
+
+function is_livepatch_mod() {
+	local mod="$1"
+
+	if [[ $(modinfo "$mod" | awk '/^livepatch:/{print $NF}') == "Y" ]]; then
+		return 0
+	fi
+
+	return 1
+}
+
+function __load_mod() {
+	local mod="$1"; shift
+	local args="$*"
+
+	local msg="% modprobe $mod $args"
+	log "${msg%% }"
+	ret=$(modprobe "$mod" $args 2>&1)
+	if [[ "$ret" != "" ]]; then
+		die "$ret"
+	fi
+
+	# Wait for module in sysfs ...
+	loop_until '[[ -e "/sys/module/$mod" ]]' ||
+		die "failed to load module $mod"
+}
+
+
+# load_mod(modname, params) - load a kernel module
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_mod() {
+	local mod="$1"; shift
+	local args="$*"
+
+	is_livepatch_mod "$mod" &&
+		die "use load_lp() to load the livepatch module $mod"
+
+	__load_mod "$mod" "$args"
+}
+
+# load_lp_nowait(modname, params) - load a kernel module with a livepatch
+#			but do not wait until the transition finishes
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_lp_nowait() {
+	local mod="$1"; shift
+	local args="$*"
+
+	is_livepatch_mod "$mod" ||
+		die "module $mod is not a livepatch"
+
+	__load_mod "$mod" "$args"
+
+	# Wait for livepatch in sysfs ...
+	loop_until '[[ -e "/sys/kernel/livepatch/$mod" ]]' ||
+		die "failed to load module $mod (sysfs)"
+}
+
+# load_lp(modname, params) - load a kernel module with a livepatch
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_lp() {
+	local mod="$1"; shift
+	local args="$*"
+
+	load_lp_nowait "$mod" "$args"
+
+	# Wait until the transition finishes ...
+	loop_until 'grep -q '^0$' /sys/kernel/livepatch/$mod/transition' ||
+		die "failed to complete transition"
+}
+
+# load_failing_mod(modname, params) - load a kernel module, expect to fail
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_failing_mod() {
+	local mod="$1"; shift
+	local args="$*"
+
+	local msg="% modprobe $mod $args"
+	log "${msg%% }"
+	ret=$(modprobe "$mod" $args 2>&1)
+	if [[ "$ret" == "" ]]; then
+		die "$mod unexpectedly loaded"
+	fi
+	log "$ret"
+}
+
+# unload_mod(modname) - unload a kernel module
+#	modname - module name to unload
+function unload_mod() {
+	local mod="$1"
+
+	# Wait for module reference count to clear ...
+	loop_until '[[ $(cat "/sys/module/$mod/refcnt") == "0" ]]' ||
+		die "failed to unload module $mod (refcnt)"
+
+	log "% rmmod $mod"
+	ret=$(rmmod "$mod" 2>&1)
+	if [[ "$ret" != "" ]]; then
+		die "$ret"
+	fi
+
+	# Wait for module in sysfs ...
+	loop_until '[[ ! -e "/sys/module/$mod" ]]' ||
+		die "failed to unload module $mod (/sys/module)"
+}
+
+# unload_lp(modname) - unload a kernel module with a livepatch
+#	modname - module name to unload
+function unload_lp() {
+	unload_mod "$1"
+}
+
+# disable_lp(modname) - disable a livepatch
+#	modname - livepatch module name to disable
+function disable_lp() {
+	local mod="$1"
+
+	log "% echo 0 > /sys/kernel/livepatch/$mod/enabled"
+	echo 0 > /sys/kernel/livepatch/"$mod"/enabled
+
+	# Wait until the transition finishes and the livepatch gets
+	# removed from sysfs...
+	loop_until '[[ ! -e "/sys/kernel/livepatch/$mod" ]]' ||
+		die "failed to disable livepatch $mod"
+}
+
+# set_pre_patch_ret(modname, pre_patch_ret)
+#	modname - module name to set
+#	pre_patch_ret - new pre_patch_ret value
+function set_pre_patch_ret {
+	local mod="$1"; shift
+	local ret="$1"
+
+	log "% echo $ret > /sys/module/$mod/parameters/pre_patch_ret"
+	echo "$ret" > /sys/module/"$mod"/parameters/pre_patch_ret
+
+	# Wait for sysfs value to hold ...
+	loop_until '[[ $(cat "/sys/module/$mod/parameters/pre_patch_ret") == "$ret" ]]' ||
+		die "failed to set pre_patch_ret parameter for $mod module"
+}
+
+# check_result() - verify dmesg output
+#	TODO - better filter, out of order msgs, etc?
+function check_result {
+	local expect="$*"
+	local result
+
+	result=$(dmesg | grep -v 'tainting' | grep -e 'livepatch:' -e 'test_klp' | sed 's/^\[[ 0-9.]*\] //')
+
+	if [[ "$expect" == "$result" ]] ; then
+		echo "ok"
+	else
+		echo -e "not ok\n\n$(diff -upr --label expected --label result <(echo "$expect") <(echo "$result"))\n"
+		die "livepatch kselftest(s) failed"
+	fi
+}
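The loop_until() helper above defers evaluation via eval, so callers pass the condition in single quotes and variables are resolved on every retry (bash's dynamic scoping makes the caller's locals visible inside the function). A standalone sketch with a shortened retry budget, not the selftest defaults:

```shell
#!/bin/bash
# Standalone sketch of the loop_until() retry pattern from functions.sh:
# re-eval a condition until it succeeds or the retry budget is spent.
# The budget here is shortened for demonstration.

MAX_RETRIES=50
RETRY_INTERVAL=".01"	# seconds

loop_until() {
	local cmd="$*"
	local i=0
	while true; do
		eval "$cmd" && return 0
		[[ $((i++)) -eq $MAX_RETRIES ]] && return 1
		sleep $RETRY_INTERVAL
	done
}

# Wait for a file that a background job creates asynchronously; the
# single-quoted condition is re-evaluated on each retry, so it sees
# the file once it appears (well within the ~0.5s budget here).
tmpfile=$(mktemp -u)
( sleep 0.05; touch "$tmpfile" ) &
if loop_until '[[ -e $tmpfile ]]'; then
	echo "condition met"
else
	echo "timed out"
fi
wait
rm -f "$tmpfile"
```

The selftests use the same pattern for slower events (module refcounts draining, sysfs entries appearing, patch transitions completing), with a correspondingly larger MAX_RETRIES.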
diff --git a/tools/testing/selftests/livepatch/test-callbacks.sh b/tools/testing/selftests/livepatch/test-callbacks.sh
new file mode 100755
index 000000000000..4b445c1d3c0b
--- /dev/null
+++ b/tools/testing/selftests/livepatch/test-callbacks.sh
@@ -0,0 +1,587 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+. $(dirname $0)/functions.sh
+
+MOD_LIVEPATCH=test_klp_callbacks_demo
+MOD_LIVEPATCH2=test_klp_callbacks_demo2
+MOD_TARGET=test_klp_callbacks_mod
+MOD_TARGET_BUSY=test_klp_callbacks_busy
+
+set_dynamic_debug
+
+
+# TEST: target module before livepatch
+#
+# Test a combination of loading a kernel module and a livepatch that
+# patches a function in the first module.  Load the target module
+# before the livepatch module.  Unload them in the same order.
+#
+# - On livepatch enable, before the livepatch transition starts,
+#   pre-patch callbacks are executed for vmlinux and $MOD_TARGET (those
+#   klp_objects currently loaded).  After klp_objects are patched
+#   according to the klp_patch, their post-patch callbacks run and the
+#   transition completes.
+#
+# - Similarly, on livepatch disable, pre-unpatch callbacks run before the
+#   unpatching transition starts.  klp_objects are reverted, post-unpatch
+#   callbacks execute and the transition completes.
+
+echo -n "TEST: target module before livepatch ... "
+dmesg -C
+
+load_mod $MOD_TARGET
+load_lp $MOD_LIVEPATCH
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET
+
+check_result "% modprobe $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_init
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit"
+
+
+# TEST: module_coming notifier
+#
+# This test is similar to the previous test, but (un)load the livepatch
+# module before the target kernel module.  This tests the livepatch
+# core's module_coming handler.
+#
+# - On livepatch enable, only pre/post-patch callbacks are executed for
+#   currently loaded klp_objects, in this case, vmlinux.
+#
+# - When a targeted module is subsequently loaded, only its
+#   pre/post-patch callbacks are executed.
+#
+# - On livepatch disable, all currently loaded klp_objects' (vmlinux and
+#   $MOD_TARGET) pre/post-unpatch callbacks are executed.
+
+echo -n "TEST: module_coming notifier ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit"
+
+
+# TEST: module_going notifier
+#
+# Test loading the livepatch after a targeted kernel module, then unload
+# the kernel module before disabling the livepatch.  This tests the
+# livepatch core's module_going handler.
+#
+# - First load a target module, then the livepatch.
+#
+# - When a target module is unloaded, the livepatch is only reverted
+#   from that klp_object ($MOD_TARGET).  As such, only its pre and
+#   post-unpatch callbacks are executed when this occurs.
+#
+# - When the livepatch is disabled, pre and post-unpatch callbacks are
+#   run for the remaining klp_object, vmlinux.
+
+echo -n "TEST: module_going notifier ... "
+dmesg -C
+
+load_mod $MOD_TARGET
+load_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_init
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: module_coming and module_going notifiers
+#
+# This test is similar to the previous test, however the livepatch is
+# loaded first.  This tests the livepatch core's module_coming and
+# module_going handlers.
+#
+# - First load the livepatch.
+#
+# - When a targeted kernel module is subsequently loaded, only its
+#   pre/post-patch callbacks are executed.
+#
+# - When the target module is unloaded, the livepatch is only reverted
+#   from the $MOD_TARGET klp_object.  As such, only pre and
+#   post-unpatch callbacks are executed when this occurs.
+
+echo -n "TEST: module_coming and module_going notifiers ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_mod $MOD_TARGET
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: target module not present
+#
+# A simple test of loading a livepatch without one of its patch target
+# klp_objects ever loaded ($MOD_TARGET).
+#
+# - Load the livepatch.
+#
+# - As expected, only pre/post-(un)patch handlers are executed for
+#   vmlinux.
+
+echo -n "TEST: target module not present ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: pre-patch callback -ENODEV
+#
+# Test a scenario where a vmlinux pre-patch callback returns a non-zero
+# status (ie, failure).
+#
+# - First load a target module.
+#
+# - Load the livepatch module, setting its 'pre_patch_ret' value to -19
+#   (-ENODEV).  When its vmlinux pre-patch callback executes, this
+#   status code will propagate back to the module-loading subsystem.
+#   The result is that the insmod command refuses to load the livepatch
+#   module.
+
+echo -n "TEST: pre-patch callback -ENODEV ... "
+dmesg -C
+
+load_mod $MOD_TARGET
+load_failing_mod $MOD_LIVEPATCH pre_patch_ret=-19
+unload_mod $MOD_TARGET
+
+check_result "% modprobe $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_init
+% modprobe $MOD_LIVEPATCH pre_patch_ret=-19
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: pre-patch callback failed for object '$MOD_TARGET'
+livepatch: failed to enable patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': canceling patching transition, going to unpatch
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+modprobe: ERROR: could not insert '$MOD_LIVEPATCH': No such device
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit"
+
+
+# TEST: module_coming + pre-patch callback -ENODEV
+#
+# Similar to the previous test, setup a livepatch such that its vmlinux
+# pre-patch callback returns success.  However, when a targeted kernel
+# module is later loaded, have the livepatch return a failing status
+# code.
+#
+# - Load the livepatch, vmlinux pre-patch callback succeeds.
+#
+# - Set a trap so subsequent pre-patch callbacks to this livepatch will
+#   return -ENODEV.
+#
+# - The livepatch pre-patch callback for subsequently loaded target
+#   modules will return failure, so the module loader refuses to load
+#   the kernel module.  No post-patch or pre/post-unpatch callbacks are
+#   executed for this klp_object.
+#
+# - Pre/post-unpatch callbacks are run for the vmlinux klp_object.
+
+echo -n "TEST: module_coming + pre-patch callback -ENODEV ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+set_pre_patch_ret $MOD_LIVEPATCH -19
+load_failing_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo -19 > /sys/module/$MOD_LIVEPATCH/parameters/pre_patch_ret
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+livepatch: pre-patch callback failed for object '$MOD_TARGET'
+livepatch: patch '$MOD_LIVEPATCH' failed for module '$MOD_TARGET', refusing to load module '$MOD_TARGET'
+modprobe: ERROR: could not insert '$MOD_TARGET': No such device
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: multiple target modules
+#
+# Test loading multiple targeted kernel modules.  This test-case is
+# mainly for comparing with the next test-case.
+#
+# - Load a target "busy" kernel module which kicks off a worker function
+#   that immediately exits.
+#
+# - Proceed with loading the livepatch and another ordinary target
+#   module.  Post-patch callbacks are executed and the transition
+#   completes quickly.
+
+echo -n "TEST: multiple target modules ... "
+dmesg -C
+
+load_mod $MOD_TARGET_BUSY sleep_secs=0
+# give $MOD_TARGET_BUSY::busymod_work_func() a chance to run
+sleep 5
+load_lp $MOD_LIVEPATCH
+load_mod $MOD_TARGET
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET_BUSY
+
+check_result "% modprobe $MOD_TARGET_BUSY sleep_secs=0
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_init
+$MOD_TARGET_BUSY: busymod_work_func, sleeping 0 seconds ...
+$MOD_TARGET_BUSY: busymod_work_func exit
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET_BUSY
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_exit"
+
+
+
+# TEST: busy target module
+#
+# A test similar to the previous one, but force the "busy" kernel
+# module to do longer work.
+#
+# The livepatching core will refuse to patch a task that is currently
+# executing a to-be-patched function -- the consistency model stalls the
+# current patch transition until this safety-check is met.  Test a
+# scenario where one of a livepatch's target klp_objects sits on such a
+# function for a long time.  Meanwhile, load and unload other target
+# kernel modules while the livepatch transition is in progress.
+#
+# - Load the "busy" kernel module, this time make it do 10 seconds worth
+#   of work.
+#
+# - Meanwhile, the livepatch is loaded.  Notice that the patch
+#   transition does not complete as the targeted "busy" module is
+#   sitting on a to-be-patched function.
+#
+# - Load a second target module (this one is an ordinary idle kernel
+#   module).  Note that *no* post-patch callbacks will be executed while
+#   the livepatch is still in transition.
+#
+# - Request an unload of the simple kernel module.  The patch is still
+#   transitioning, so its pre-unpatch callbacks are skipped.
+#
+# - Finally the livepatch is disabled.  Since none of the patch's
+#   klp_objects' post-patch callbacks executed, the remaining
+#   klp_objects' pre-unpatch callbacks are skipped.
+
+echo -n "TEST: busy target module ... "
+dmesg -C
+
+load_mod $MOD_TARGET_BUSY sleep_secs=10
+load_lp_nowait $MOD_LIVEPATCH
+# Don't wait for transition, load $MOD_TARGET while the transition
+# is still stalled in $MOD_TARGET_BUSY::busymod_work_func()
+sleep 5
+load_mod $MOD_TARGET
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET_BUSY
+
+check_result "% modprobe $MOD_TARGET_BUSY sleep_secs=10
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_init
+$MOD_TARGET_BUSY: busymod_work_func, sleeping 10 seconds ...
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': reversing transition from patching to unpatching
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET_BUSY
+$MOD_TARGET_BUSY: busymod_work_func exit
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_exit"
+
+
+# TEST: multiple livepatches
+#
+# Test loading multiple livepatches.  This test-case is mainly for comparing
+# with the next test-case.
+#
+# - Load and unload two livepatches, pre and post (un)patch callbacks
+#   execute as each patch progresses through its (un)patching
+#   transition.
+
+echo -n "TEST: multiple livepatches ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_lp $MOD_LIVEPATCH2
+disable_lp $MOD_LIVEPATCH2
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH2
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_LIVEPATCH2
+livepatch: enabling patch '$MOD_LIVEPATCH2'
+livepatch: '$MOD_LIVEPATCH2': initializing patching transition
+$MOD_LIVEPATCH2: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting patching transition
+livepatch: '$MOD_LIVEPATCH2': completing patching transition
+$MOD_LIVEPATCH2: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH2/enabled
+livepatch: '$MOD_LIVEPATCH2': initializing unpatching transition
+$MOD_LIVEPATCH2: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH2': completing unpatching transition
+$MOD_LIVEPATCH2: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': unpatching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH2
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: atomic replace
+#
+# Load multiple livepatches, but the second as an 'atomic-replace'
+# patch.  When the latter loads, the original livepatch should be
+# disabled and *none* of its pre/post-unpatch callbacks executed.  On
+# the other hand, when the atomic-replace livepatch is disabled, its
+# pre/post-unpatch callbacks *should* be executed.
+#
+# - Load and unload two livepatches, the second of which has its
+#   .replace flag set true.
+#
+# - Pre and post patch callbacks are executed for both livepatches.
+#
+# - Once the atomic replace module is loaded, only its own pre/post
+#   unpatch callbacks are executed; the replaced patch's are skipped.
+
+echo -n "TEST: atomic replace ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_lp $MOD_LIVEPATCH2 replace=1
+disable_lp $MOD_LIVEPATCH2
+unload_lp $MOD_LIVEPATCH2
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_LIVEPATCH2 replace=1
+livepatch: enabling patch '$MOD_LIVEPATCH2'
+livepatch: '$MOD_LIVEPATCH2': initializing patching transition
+$MOD_LIVEPATCH2: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting patching transition
+livepatch: '$MOD_LIVEPATCH2': completing patching transition
+$MOD_LIVEPATCH2: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH2/enabled
+livepatch: '$MOD_LIVEPATCH2': initializing unpatching transition
+$MOD_LIVEPATCH2: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH2': completing unpatching transition
+$MOD_LIVEPATCH2: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': unpatching complete
+% rmmod $MOD_LIVEPATCH2
+% rmmod $MOD_LIVEPATCH"
+
+
+exit 0
diff --git a/tools/testing/selftests/livepatch/test-livepatch.sh b/tools/testing/selftests/livepatch/test-livepatch.sh
new file mode 100755
index 000000000000..f05268aea859
--- /dev/null
+++ b/tools/testing/selftests/livepatch/test-livepatch.sh
@@ -0,0 +1,168 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+. $(dirname $0)/functions.sh
+
+MOD_LIVEPATCH=test_klp_livepatch
+MOD_REPLACE=test_klp_atomic_replace
+
+set_dynamic_debug
+
+
+# TEST: basic function patching
+# - load a livepatch that modifies the output from /proc/cmdline and
+#   verify correct behavior
+# - unload the livepatch and make sure the patch was removed
+
+echo -n "TEST: basic function patching ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+
+if [[ "$(cat /proc/cmdline)" != "$MOD_LIVEPATCH: this has been live patched" ]] ; then
+	echo -e "FAIL\n\n"
+	die "livepatch kselftest(s) failed"
+fi
+
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+if [[ "$(cat /proc/cmdline)" == "$MOD_LIVEPATCH: this has been live patched" ]] ; then
+	echo -e "FAIL\n\n"
+	die "livepatch kselftest(s) failed"
+fi
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: multiple livepatches
+# - load a livepatch that modifies the output from /proc/cmdline and
+#   verify correct behavior
+# - load another livepatch and verify that both livepatches are active
+# - unload the second livepatch and verify that the first is still active
+# - unload the first livepatch and verify none are active
+
+echo -n "TEST: multiple livepatches ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+load_lp $MOD_REPLACE replace=0
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+disable_lp $MOD_REPLACE
+unload_lp $MOD_REPLACE
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+livepatch: '$MOD_LIVEPATCH': patching complete
+$MOD_LIVEPATCH: this has been live patched
+% modprobe $MOD_REPLACE replace=0
+livepatch: enabling patch '$MOD_REPLACE'
+livepatch: '$MOD_REPLACE': initializing patching transition
+livepatch: '$MOD_REPLACE': starting patching transition
+livepatch: '$MOD_REPLACE': completing patching transition
+livepatch: '$MOD_REPLACE': patching complete
+$MOD_LIVEPATCH: this has been live patched
+$MOD_REPLACE: this has been live patched
+% echo 0 > /sys/kernel/livepatch/$MOD_REPLACE/enabled
+livepatch: '$MOD_REPLACE': initializing unpatching transition
+livepatch: '$MOD_REPLACE': starting unpatching transition
+livepatch: '$MOD_REPLACE': completing unpatching transition
+livepatch: '$MOD_REPLACE': unpatching complete
+% rmmod $MOD_REPLACE
+$MOD_LIVEPATCH: this has been live patched
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: atomic replace livepatch
+# - load a livepatch that modifies the output from /proc/cmdline and
+#   verify correct behavior
+# - load an atomic replace livepatch and verify that only the second is active
+# - remove the first livepatch and verify that the atomic replace livepatch
+#   is still active
+# - remove the atomic replace livepatch and verify that none are active
+
+echo -n "TEST: atomic replace livepatch ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+load_lp $MOD_REPLACE replace=1
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+unload_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+disable_lp $MOD_REPLACE
+unload_lp $MOD_REPLACE
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+livepatch: '$MOD_LIVEPATCH': patching complete
+$MOD_LIVEPATCH: this has been live patched
+% modprobe $MOD_REPLACE replace=1
+livepatch: enabling patch '$MOD_REPLACE'
+livepatch: '$MOD_REPLACE': initializing patching transition
+livepatch: '$MOD_REPLACE': starting patching transition
+livepatch: '$MOD_REPLACE': completing patching transition
+livepatch: '$MOD_REPLACE': patching complete
+$MOD_REPLACE: this has been live patched
+% rmmod $MOD_LIVEPATCH
+$MOD_REPLACE: this has been live patched
+% echo 0 > /sys/kernel/livepatch/$MOD_REPLACE/enabled
+livepatch: '$MOD_REPLACE': initializing unpatching transition
+livepatch: '$MOD_REPLACE': starting unpatching transition
+livepatch: '$MOD_REPLACE': completing unpatching transition
+livepatch: '$MOD_REPLACE': unpatching complete
+% rmmod $MOD_REPLACE"
+
+
+exit 0
diff --git a/tools/testing/selftests/livepatch/test-shadow-vars.sh b/tools/testing/selftests/livepatch/test-shadow-vars.sh
new file mode 100755
index 000000000000..04a37831e204
--- /dev/null
+++ b/tools/testing/selftests/livepatch/test-shadow-vars.sh
@@ -0,0 +1,60 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+. $(dirname $0)/functions.sh
+
+MOD_TEST=test_klp_shadow_vars
+
+set_dynamic_debug
+
+
+# TEST: basic shadow variable API
+# - load a module that exercises the shadow variable API
+
+echo -n "TEST: basic shadow variable API ... "
+dmesg -C
+
+load_mod $MOD_TEST
+unload_mod $MOD_TEST
+
+check_result "% modprobe $MOD_TEST
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: shadow_ctor: PTR6 -> PTR1
+$MOD_TEST: klp_shadow_alloc(obj=PTR5, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR1 = PTR6
+$MOD_TEST: shadow_ctor: PTR8 -> PTR2
+$MOD_TEST: klp_shadow_alloc(obj=PTR9, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR2 = PTR8
+$MOD_TEST: shadow_ctor: PTR10 -> PTR3
+$MOD_TEST: klp_shadow_alloc(obj=PTR5, id=0x1235, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR3 = PTR10
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR6
+$MOD_TEST:   got expected PTR6 -> PTR1 result
+$MOD_TEST: klp_shadow_get(obj=PTR9, id=0x1234) = PTR8
+$MOD_TEST:   got expected PTR8 -> PTR2 result
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1235) = PTR10
+$MOD_TEST:   got expected PTR10 -> PTR3 result
+$MOD_TEST: shadow_ctor: PTR11 -> PTR4
+$MOD_TEST: klp_shadow_get_or_alloc(obj=PTR12, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR4 = PTR11
+$MOD_TEST: klp_shadow_get_or_alloc(obj=PTR12, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR4 = PTR11
+$MOD_TEST:   got expected PTR11 -> PTR4 result
+$MOD_TEST: shadow_dtor(obj=PTR5, shadow_data=PTR6)
+$MOD_TEST: klp_shadow_free(obj=PTR5, id=0x1234, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: shadow_dtor(obj=PTR9, shadow_data=PTR8)
+$MOD_TEST: klp_shadow_free(obj=PTR9, id=0x1234, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR9, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: shadow_dtor(obj=PTR12, shadow_data=PTR11)
+$MOD_TEST: klp_shadow_free(obj=PTR12, id=0x1234, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR12, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1235) = PTR10
+$MOD_TEST:   got expected PTR10 -> PTR3 result
+$MOD_TEST: shadow_dtor(obj=PTR5, shadow_data=PTR10)
+$MOD_TEST: klp_shadow_free_all(id=0x1235, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR0
+$MOD_TEST:   shadow_get() got expected NULL result
+% rmmod test_klp_shadow_vars"
+
+exit 0
-- 
2.13.7


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 00/12]
  2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
                   ` (11 preceding siblings ...)
  2018-08-28 14:36 ` [PATCH v12 12/12] selftests/livepatch: introduce tests Petr Mladek
@ 2018-08-30 11:58 ` Miroslav Benes
  2018-10-11 12:48   ` Petr Mladek
  12 siblings, 1 reply; 34+ messages in thread
From: Miroslav Benes @ 2018-08-30 11:58 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue, 28 Aug 2018, Petr Mladek wrote:

> livepatch: Atomic replace feature
> 
> The atomic replace allows to create cumulative patches. They
> are useful when you maintain many livepatches and want to remove
> one that is lower on the stack. In addition it is very useful when
> more patches touch the same function and there are dependencies
> between them.
> 
> This version does another big refactoring based on feedback against
> v11[*]. In particular, it removes the registration step, changes
> the API and handling of livepatch dependencies. The aim is
> to keep the number of possible variants at a sane level.
> It helps to keep the feature "easy" to use and maintain.
> 
> [*] https://lkml.kernel.org/r/20180323120028.31451-1-pmladek@suse.com

Hi,

I've started to review the patch set. Running selftests with lockdep 
enabled gives me...

======================================================
WARNING: possible circular locking dependency detected
4.17.0-rc1-klp_replace_v12-117114-gfedb3eba611d #218 Tainted: G              K
------------------------------------------------------
kworker/1:1/49 is trying to acquire lock:
00000000bb88dc17 (kn->count#186){++++}, at: kernfs_remove+0x23/0x40

but task is already holding lock:
0000000073632424 (klp_mutex){+.+.}, at: klp_transition_work_fn+0x17/0x40

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (klp_mutex){+.+.}:
       lock_acquire+0xd4/0x220
       __mutex_lock+0x75/0x920
       mutex_lock_nested+0x1b/0x20
       enabled_store+0x47/0x150
       kobj_attr_store+0x12/0x20
       sysfs_kf_write+0x4a/0x60
       kernfs_fop_write+0x123/0x1b0
       __vfs_write+0x2b/0x150
       vfs_write+0xc7/0x1c0
       ksys_write+0x49/0xa0
       __x64_sys_write+0x1a/0x20
       do_syscall_64+0x62/0x1b0
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (kn->count#186){++++}:
       __lock_acquire+0xe9d/0x1240
       lock_acquire+0xd4/0x220
       __kernfs_remove+0x23c/0x2c0
       kernfs_remove+0x23/0x40
       sysfs_remove_dir+0x51/0x60
       kobject_del+0x18/0x50
       kobject_cleanup+0x4b/0x180
       kobject_put+0x2a/0x50
       __klp_free_patch+0x5b/0x60
       klp_free_patch_nowait+0x12/0x30
       klp_try_complete_transition+0x13e/0x180
       klp_transition_work_fn+0x26/0x40
       process_one_work+0x1d8/0x5d0
       worker_thread+0x4d/0x3d0
       kthread+0x113/0x150
       ret_from_fork+0x3a/0x50

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(klp_mutex);
                               lock(kn->count#186);
                               lock(klp_mutex);
  lock(kn->count#186);

 *** DEADLOCK ***

3 locks held by kworker/1:1/49:
 #0: 00000000654f4e5a ((wq_completion)"events"){+.+.}, at: process_one_work+0x153/0x5d0
 #1: 000000003c1dc846 ((klp_transition_work).work){+.+.}, at: process_one_work+0x153/0x5d0
 #2: 0000000073632424 (klp_mutex){+.+.}, at: klp_transition_work_fn+0x17/0x40

stack backtrace:
CPU: 1 PID: 49 Comm: kworker/1:1 Tainted: G              K   4.17.0-rc1-klp_replace_v12-117114-gfedb3eba611d #218
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
Workqueue: events klp_transition_work_fn
Call Trace:
 dump_stack+0x81/0xb8
 print_circular_bug.isra.39+0x200/0x20e
 check_prev_add.constprop.47+0x725/0x740
 ? print_shortest_lock_dependencies+0x1c0/0x1c0
 __lock_acquire+0xe9d/0x1240
 lock_acquire+0xd4/0x220
 ? kernfs_remove+0x23/0x40
 __kernfs_remove+0x23c/0x2c0
 ? kernfs_remove+0x23/0x40
 kernfs_remove+0x23/0x40
 sysfs_remove_dir+0x51/0x60
 kobject_del+0x18/0x50
 kobject_cleanup+0x4b/0x180
 kobject_put+0x2a/0x50
 __klp_free_patch+0x5b/0x60
 klp_free_patch_nowait+0x12/0x30
 klp_try_complete_transition+0x13e/0x180
 klp_transition_work_fn+0x26/0x40
 process_one_work+0x1d8/0x5d0
 ? process_one_work+0x153/0x5d0
 worker_thread+0x4d/0x3d0
 ? trace_hardirqs_on+0xd/0x10
 kthread+0x113/0x150
 ? process_one_work+0x5d0/0x5d0
 ? kthread_delayed_work_timer_fn+0x90/0x90
 ? kthread_delayed_work_timer_fn+0x90/0x90
 ret_from_fork+0x3a/0x50


I think it could be related to registration removal and API changes. One 
thread writes to sysfs and wants to take klp_mutex there (CPU#1), the 
other holds klp_mutex in a transition period and calls klp_free_patch() 
to remove the sysfs infrastructure.

Regards,
Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 01/12] livepatch: Change void *new_func -> unsigned long new_addr in struct klp_func
  2018-08-28 14:35 ` [PATCH v12 01/12] livepatch: Change void *new_func -> unsigned long new_addr in struct klp_func Petr Mladek
@ 2018-08-31  8:37   ` Miroslav Benes
  0 siblings, 0 replies; 34+ messages in thread
From: Miroslav Benes @ 2018-08-31  8:37 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue, 28 Aug 2018, Petr Mladek wrote:

> The address of the to be patched function and new function is stored
> in struct klp_func as:
> 
> 	void *new_func;
> 	unsigned long old_addr;
> 
> The different naming scheme and type is derived from the way how
> the addresses are set. @old_addr is assigned at runtime using
> kallsyms-based search. @new_func is statically initialized,
> for example:
> 
>   static struct klp_func funcs[] = {
> 	{
> 		.old_name = "cmdline_proc_show",
> 		.new_func = livepatch_cmdline_proc_show,
> 	}, { }
>   };
> 
> This patch changes void *new_func -> unsigned long new_addr. It removes
> some confusion when these address are later used in the code. It is
> motivated by a followup patch that adds special NOP struct klp_func
> where we want to assign func->new_func = func->old_addr respectively
> func->new_addr = func->old_addr.
> 
> This patch does not modify the existing behavior.
> 
> IMPORTANT: This patch modifies ABI. The patches will need to use,
> for example:
> 
>   static struct klp_func funcs[] = {
> 	{
> 		.old_name = "cmdline_proc_show",
> 		.new_addr = (unsigned long)livepatch_cmdline_proc_show,
> 	}, { }
>   };
> 
> Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
> Signed-off-by: Petr Mladek <pmladek@suse.com>

I'm not convinced the patch makes things any better. The next patch 
slightly improves it, but still. Is new_func really such a problem?

Thanks,
Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 03/12] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code
  2018-08-28 14:35 ` [PATCH v12 03/12] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code Petr Mladek
@ 2018-08-31  8:38   ` Miroslav Benes
  0 siblings, 0 replies; 34+ messages in thread
From: Miroslav Benes @ 2018-08-31  8:38 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue, 28 Aug 2018, Petr Mladek wrote:

> We are going to simplify the API and code by removing the registration
> step. This would require calling init/free functions from enable/disable
> ones.
> 
> This patch just moves the code the code to prevent more forward
> declarations.

s/the code the code/the code/
 
> This patch does not change the code except of two forward declarations.
> 
> Signed-off-by: Petr Mladek <pmladek@suse.com>

Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 04/12] livepatch: Consolidate klp_free functions
  2018-08-28 14:35 ` [PATCH v12 04/12] livepatch: Consolidate klp_free functions Petr Mladek
@ 2018-08-31 10:39   ` Miroslav Benes
  2018-10-12 11:43     ` Petr Mladek
  0 siblings, 1 reply; 34+ messages in thread
From: Miroslav Benes @ 2018-08-31 10:39 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue, 28 Aug 2018, Petr Mladek wrote:

> The code for freeing livepatch structures is a bit scattered and tricky:
> 
>   + direct calls to klp_free_*_limited() and kobject_put() are
>     used to release partially initialized objects
> 
>   + klp_free_patch() removes the patch from the public list
>     and releases all objects except for patch->kobj
> 
>   + object_put(&patch->kobj) and the related wait_for_completion()
>     are called directly outside klp_mutex; this code is duplicated;
> 
> Now, we are going to remove the registration stage to simplify the API
> and the code. This would require handling more situations in
> klp_enable_patch() error paths.
> 
> More importantly, we are going to add a feature called atomic replace.
> It will need to dynamically create func and object structures. We will
> want to reuse the existing init() and free() functions. This would
> create even more error path scenarios.
> 
> This patch implements more clever free functions:
> 
>   + checks kobj.state_initialized instead of @limit
> 
>   + initializes patch->list early so that the check for empty list
>     always works
> 
>   + The action(s) that has to be done outside klp_mutex are done
>     in separate klp_free_patch_end() function. It waits only
>     when patch->kobj was really released via the _begin() part.
> 
> Note that it is safe to put patch->kobj under klp_mutex. It calls
> the release callback only when the reference count reaches zero.
> Therefore it does not block any related sysfs operation that took
> a reference and might eventually wait for klp_mutex.

This seems to be the reason for the issue which lockdep reported. The patch 
moved kobject_put(&patch->kobj) under klp_mutex. Perhaps I cannot read 
kernfs code properly today, but I fail to understand why it is supposed to 
be safe.

Indeed, if it is safe, the lockdep report is a false positive.

> Note that __klp_free_patch() is split because it will be later
> used in a _nowait() variant. Also klp_free_patch_end() makes
> sense because it will later get more complicated.

There are no _begin() and _end() functions in the patch.
 
> This patch does not change the existing behavior.
> 
> Signed-off-by: Petr Mladek <pmladek@suse.com>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Jessica Yu <jeyu@kernel.org>
> Cc: Jiri Kosina <jikos@kernel.org>
> Cc: Jason Baron <jbaron@akamai.com>
> Acked-by: Miroslav Benes <mbenes@suse.cz>

My Acked-by here is a bit premature.

> ---
>  include/linux/livepatch.h |  2 ++
>  kernel/livepatch/core.c   | 92 +++++++++++++++++++++++++++++------------------
>  2 files changed, 59 insertions(+), 35 deletions(-)
> 
> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> index 1163742b27c0..22e0767d64b0 100644
> --- a/include/linux/livepatch.h
> +++ b/include/linux/livepatch.h
> @@ -138,6 +138,7 @@ struct klp_object {
>   * @list:	list node for global list of registered patches
>   * @kobj:	kobject for sysfs resources
>   * @enabled:	the patch is enabled (but operation may be incomplete)
> + * @wait_free:	wait until the patch is freed
>   * @finish:	for waiting till it is safe to remove the patch module
>   */
>  struct klp_patch {
> @@ -149,6 +150,7 @@ struct klp_patch {
>  	struct list_head list;
>  	struct kobject kobj;
>  	bool enabled;
> +	bool wait_free;
>  	struct completion finish;
>  };
>  
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index b3956cce239e..3ca404545150 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -465,17 +465,15 @@ static struct kobj_type klp_ktype_func = {
>  	.sysfs_ops = &kobj_sysfs_ops,
>  };
>  
> -/*
> - * Free all functions' kobjects in the array up to some limit. When limit is
> - * NULL, all kobjects are freed.
> - */
> -static void klp_free_funcs_limited(struct klp_object *obj,
> -				   struct klp_func *limit)
> +static void klp_free_funcs(struct klp_object *obj)
>  {
>  	struct klp_func *func;
>  
> -	for (func = obj->funcs; func->old_name && func != limit; func++)
> -		kobject_put(&func->kobj);
> +	klp_for_each_func(obj, func) {
> +		/* Might be called from klp_init_patch() error path. */
> +		if (func->kobj.state_initialized)
> +			kobject_put(&func->kobj);
> +	}
>  }

Just for the record, it is slightly suboptimal because now we iterate 
through the whole list. We could add a break to the else branch, I think, 
but it's not necessary.
  
>  /* Clean up when a patched object is unloaded */
> @@ -489,26 +487,59 @@ static void klp_free_object_loaded(struct klp_object *obj)
>  		func->old_addr = 0;
>  }
>  
> -/*
> - * Free all objects' kobjects in the array up to some limit. When limit is
> - * NULL, all kobjects are freed.
> - */
> -static void klp_free_objects_limited(struct klp_patch *patch,
> -				     struct klp_object *limit)
> +static void klp_free_objects(struct klp_patch *patch)
>  {
>  	struct klp_object *obj;
>  
> -	for (obj = patch->objs; obj->funcs && obj != limit; obj++) {
> -		klp_free_funcs_limited(obj, NULL);
> -		kobject_put(&obj->kobj);
> +	klp_for_each_object(patch, obj) {
> +		klp_free_funcs(obj);
> +
> +		/* Might be called from klp_init_patch() error path. */
> +		if (obj->kobj.state_initialized)
> +			kobject_put(&obj->kobj);
>  	}
>  }

Same here, of course.

Regards,
Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 07/12] livepatch: Use lists to manage patches, objects and functions
  2018-08-28 14:35 ` [PATCH v12 07/12] livepatch: Use lists to manage patches, objects and functions Petr Mladek
@ 2018-09-03 16:00   ` Miroslav Benes
  2018-10-12 12:12     ` Petr Mladek
  0 siblings, 1 reply; 34+ messages in thread
From: Miroslav Benes @ 2018-09-03 16:00 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel


> -#define klp_for_each_object(patch, obj) \
> +#define klp_for_each_object_static(patch, obj) \
>  	for (obj = patch->objs; obj->funcs || obj->name; obj++)
>  
> -#define klp_for_each_func(obj, func) \
> +#define klp_for_each_object(patch, obj)	\
> +	list_for_each_entry(obj, &patch->obj_list, node)
> +
> +#define klp_for_each_func_static(obj, func) \
>  	for (func = obj->funcs; \
>  	     func->old_name || func->new_addr || func->old_sympos; \
>  	     func++)
>  
> +#define klp_for_each_func(obj, func)	\
> +	list_for_each_entry(func, &obj->func_list, node)
> +
>  int klp_enable_patch(struct klp_patch *);
>  
>  void arch_klp_init_object_loaded(struct klp_patch *patch,
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 6a47b36a6c9a..7bc23a106b5b 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -50,6 +50,21 @@ LIST_HEAD(klp_patches);
>  
>  static struct kobject *klp_root_kobj;
>  
> +static void klp_init_lists(struct klp_patch *patch)
> +{
> +	struct klp_object *obj;
> +	struct klp_func *func;
> +
> +	INIT_LIST_HEAD(&patch->obj_list);
> +	klp_for_each_object_static(patch, obj) {
> +		list_add(&obj->node, &patch->obj_list);
> +
> +		INIT_LIST_HEAD(&obj->func_list);
> +		klp_for_each_func_static(obj, func)
> +			list_add(&func->node, &obj->func_list);
> +	}
> +}
> +
>  static bool klp_is_module(struct klp_object *obj)
>  {
>  	return obj->name;
> @@ -664,6 +679,7 @@ static int klp_init_patch(struct klp_patch *patch)
>  	patch->module_put = false;
>  	INIT_LIST_HEAD(&patch->list);
>  	init_completion(&patch->finish);
> +	klp_init_lists(patch);

This could explode easily if patch->objs is NULL. The check is just below.
  
>  	if (!patch->objs)
>  		return -EINVAL;

Here.

klp_init_lists() calls klp_for_each_object_static() which accesses 
obj->name without a check for obj. One could argue that it is a 
responsibility of the user not to do such a silly thing, but since the 
check is already there, could you move klp_init_lists() call after the 
check?

Regards,
Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 09/12] livepatch: Remove Nop structures when unused
  2018-08-28 14:36 ` [PATCH v12 09/12] livepatch: Remove Nop structures when unused Petr Mladek
@ 2018-09-04 14:50   ` Miroslav Benes
  0 siblings, 0 replies; 34+ messages in thread
From: Miroslav Benes @ 2018-09-04 14:50 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel


> +void klp_free_objects_dynamic(struct klp_patch *patch)

should be static.

> +{
> +	__klp_free_objects(patch, false);
> +}
> +

Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 10/12] livepatch: Atomic replace and cumulative patches documentation
  2018-08-28 14:36 ` [PATCH v12 10/12] livepatch: Atomic replace and cumulative patches documentation Petr Mladek
@ 2018-09-04 15:15   ` Miroslav Benes
  0 siblings, 0 replies; 34+ messages in thread
From: Miroslav Benes @ 2018-09-04 15:15 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue, 28 Aug 2018, Petr Mladek wrote:

> User documentation for the atomic replace feature. It makes it easier
> to maintain livepatches using so-called cumulative patches.

I think the documentation should be updated due to API changes.
 
> Signed-off-by: Petr Mladek <pmladek@suse.com>
> ---
>  Documentation/livepatch/cumulative-patches.txt | 105 +++++++++++++++++++++++++
>  1 file changed, 105 insertions(+)
>  create mode 100644 Documentation/livepatch/cumulative-patches.txt
> 
> diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
> new file mode 100644
> index 000000000000..206b7f98d270
> --- /dev/null
> +++ b/Documentation/livepatch/cumulative-patches.txt
> @@ -0,0 +1,105 @@
> +===================================
> +Atomic Replace & Cumulative Patches
> +===================================
> +
> +There might be dependencies between livepatches. If multiple patches need
> +to do different changes to the same function(s) then we need to define
> +an order in which the patches will be installed. And function implementations
> +from any newer livepatch must be done on top of the older ones.
> +
> +This might become a maintenance nightmare. Especially if anyone would want
> +to remove a patch that is in the middle of the stack.
> +
> +An elegant solution comes with the feature called "Atomic Replace". It allows
> +to create so called "Cumulative Patches". They include all wanted changes
> +from all older livepatches and completely replace them in one transition.
> +
> +Usage
> +-----
> +
> +The atomic replace can be enabled by setting "replace" flag in struct klp_patch,
> +for example:
> +
> +	static struct klp_patch patch = {
> +		.mod = THIS_MODULE,
> +		.objs = objs,
> +		.replace = true,
> +	};
> +
> +Such a patch is added on top of the livepatch stack when registered. It can
> +be enabled even when some earlier patches have not been enabled yet.

Here.

> +All processes are then migrated to use the code only from the new patch.
> +Once the transition is finished, all older patches are removed from the stack
> +of patches. Even the older not-enabled patches mentioned above. They can
> +even be unregistered and the related modules unloaded.

Here.

> +Ftrace handlers are transparently removed from functions that are no
> +longer modified by the new cumulative patch.
> +
> +As a result, the livepatch authors might maintain sources only for one
> +cumulative patch. It helps to keep the patch consistent while adding or
> +removing various fixes or features.
> +
> +Users could keep only the last patch installed on the system after
> +the transition has finished. It helps to clearly see what code is
> +actually in use. Also the livepatch might then be seen as a "normal"
> +module that modifies the kernel behavior. The only difference is that
> +it can be updated at runtime without breaking its functionality.
> +
> +
> +Features
> +--------
> +
> +The atomic replace allows:
> +
> +  + Atomically revert some functions in a previous patch while
> +    upgrading other functions.
> +
> +  + Remove eventual performance impact caused by core redirection
> +    for functions that are no longer patched.
> +
> +  + Decrease user confusion about stacking order and what patches are
> +    currently in effect.
> +
> +
> +Limitations:
> +------------
> +
> +  + Replaced patches can no longer be enabled. But if the transition
> +    to the cumulative patch was not forced, the kernel modules with
> +    the older livepatches can be removed and eventually added again.

I'd rewrite even this.

> +    A good practice is to set .replace flag in any released livepatch.
> +    Then re-adding an older livepatch is equivalent to downgrading
> +    to that patch. This is safe as long as the livepatches do _not_ do
> +    extra modifications in (un)patching callbacks or in the module_init()
> +    or module_exit() functions, see below.
> +
> +
> +  + Only the (un)patching callbacks from the _new_ cumulative livepatch are
> +    executed. Any callbacks from the replaced patches are ignored.
> +
> +    By other words, the cumulative patch is responsible for doing any actions
> +    that are necessary to properly replace any older patch.

s/By other words/In other words/

> +    As a result, it might be dangerous to replace newer cumulative patches by
> +    older ones. The old livepatches might not provide the necessary callbacks.
> +
> +    This might be seen as a limitation in some scenarios. But it makes the life
> +    easier in many others. Only the new cumulative livepatch knows what
> +    fixes/features are added/removed and what special actions are necessary
> +    for a smooth transition.
> +
> +    In each case, it would be a nightmare to think about the order of
> +    the various callbacks and their interactions if the callbacks from all
> +    enabled patches were called.

s/In each case/In any case/ ?

> +  + There is no special handling of shadow variables. Livepatch authors
> +    must create their own rules how to pass them from one cumulative
> +    patch to the other. Especially they should not blindly remove them
> +    in module_exit() functions.
> +
> +    A good practice might be to remove shadow variables in the post-unpatch
> +    callback. It is called only when the livepatch is properly disabled.

Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-08-28 14:35 ` [PATCH v12 06/12] livepatch: Simplify API by removing registration step Petr Mladek
@ 2018-09-05  9:34   ` Miroslav Benes
  2018-10-12 13:01     ` Petr Mladek
  0 siblings, 1 reply; 34+ messages in thread
From: Miroslav Benes @ 2018-09-05  9:34 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue, 28 Aug 2018, Petr Mladek wrote:

> The possibility to re-enable a registered patch was useful for immediate
> patches where the livepatch module had to stay until the system reboot.
> The improved consistency model allows to achieve the same result by
> unloading and loading the livepatch module again.
> 
> Also we are going to add a feature called atomic replace. It will allow
> to create a patch that would replace all already registered patches. The
> aim is to handle dependent patches a more secure way. It will obsolete

"in a more secure way", or "more securely" is maybe even better.

> the stack of patches that helped to handle the dependencies so far.
> Then it might be unclear when a cumulative patch re-enabling is safe.
> 
> It would be complicated to support the many modes. Instead we could
> actually make the API and code easier.

s/easier/simpler/ ?

or "easier to understand" ?
 
> This patch removes the two step public API. All the checks and init calls
> are moved from klp_register_patch() to klp_enabled_patch(). Also the patch
> is automatically freed, including the sysfs interface when the transition
> to the disabled state is completed.
> 
> As a result, there is newer a disabled patch on the top of the stack.

s/newer/never/

> Therefore we do not need to check the stack in __klp_enable_patch().
> And we could simplify the check in __klp_disable_patch().
> 
> Also the API and logic is much easier. It is enough to call
> klp_enable_patch() in module_init() call. The patch can be disabled
> by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> can be removed once the transition finishes and sysfs interface is freed.

I think it would be good to discuss our sysfs interface here as well.

Writing '1' to enabled attribute now makes sense only when you need to 
reverse an unpatching transition. Writing '0' means "disable" or a 
reversion again.

Wouldn't it be better to split it into two different attributes? Something like 
"disable" and "reverse"? It could be more intuitive.

Maybe we'd also find out that even patch->enabled member is not useful 
anymore in such case.

> diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
> index 2d7ed09dbd59..7fb01d27d81d 100644
> --- a/Documentation/livepatch/livepatch.txt
> +++ b/Documentation/livepatch/livepatch.txt
> @@ -14,10 +14,8 @@ Table of Contents:

[...]

> -5.2. Enabling
> +5.1. Enabling
>  -------------
>  
> -Registered patches might be enabled either by calling klp_enable_patch() or
> -by writing '1' to /sys/kernel/livepatch/<name>/enabled. The system will
> -start using the new implementation of the patched functions at this stage.
> +Livepatch modules have to call klp_enable_patch() in module_init() callback.
> +This function is rather complex and might even fail in the early phase.
>  
> -When a patch is enabled, livepatch enters into a transition state where
> -tasks are converging to the patched state.  This is indicated by a value
> -of '1' in /sys/kernel/livepatch/<name>/transition.  Once all tasks have
> -been patched, the 'transition' value changes to '0'.  For more
> -information about this process, see the "Consistency model" section.
> +First, the addresses of the patched functions are found according to their
> +names. The special relocations, mentioned in the section "New functions",
> +are applied. The relevant entries are created under
> +/sys/kernel/livepatch/<name>. The patch is rejected when any above
> +operation fails.
>  
> -If an original function is patched for the first time, a function
> -specific struct klp_ops is created and an universal ftrace handler is
> -registered.
> +Third, livepatch enters into a transition state where tasks are converging

s/Third/Second/ ?

[...]

> @@ -655,116 +660,38 @@ static int klp_init_patch(struct klp_patch *patch)
>  	struct klp_object *obj;
>  	int ret;
>  
> -	if (!patch->objs)
> -		return -EINVAL;
> -
> -	mutex_lock(&klp_mutex);
> -
>  	patch->enabled = false;
> -	patch->forced = false;
> +	patch->module_put = false;
>  	INIT_LIST_HEAD(&patch->list);
>  	init_completion(&patch->finish);
>  
> +	if (!patch->objs)
> +		return -EINVAL;
> +
> +	/*
> +	 * A reference is taken on the patch module to prevent it from being
> +	 * unloaded.
> +	 */
> +	if (!try_module_get(patch->mod))
> +		return -ENODEV;
> +	patch->module_put = true;
> +
>  	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
>  				   klp_root_kobj, "%s", patch->mod->name);
>  	if (ret) {
> -		mutex_unlock(&klp_mutex);
>  		return ret;
>  	}

{ } are not necessary after the change.

[...]

>  static int __klp_enable_patch(struct klp_patch *patch)
>  {
>  	struct klp_object *obj;
> @@ -846,17 +740,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
>  	if (WARN_ON(patch->enabled))
>  		return -EINVAL;
>  
> -	/* enforce stacking: only the first disabled patch can be enabled */
> -	if (patch->list.prev != &klp_patches &&
> -	    !list_prev_entry(patch, list)->enabled)
> -		return -EBUSY;
> -
> -	/*
> -	 * A reference is taken on the patch module to prevent it from being
> -	 * unloaded.
> -	 */
> -	if (!try_module_get(patch->mod))
> -		return -ENODEV;
> +	if (!patch->kobj.state_initialized)
> +		return -EINVAL;

I think the check is not needed here. __klp_enable_patch() is called right after
klp_init_patch() in klp_enable_patch().

>  	pr_notice("enabling patch '%s'\n", patch->mod->name);
>  

[...]

> @@ -405,7 +399,11 @@ void klp_try_complete_transition(void)
>  	}
>  
>  	/* we're done, now cleanup the data structures */
> +	patch = klp_transition_patch;
>  	klp_complete_transition();
> +
> +	if (!patch->enabled)
> +		klp_free_patch_nowait(patch);
>  }

I'd welcome a comment here. I thought it was more logical to call
klp_free_patch_nowait() in klp_complete_transition(). It's not possible though.
klp_complete_transition() is also called from klp_cancel_transition() which has
its own freeing in klp_enable_patch()'s error path.
 
Regards,
Miroslav

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 00/12]
  2018-08-30 11:58 ` [PATCH v12 00/12] Miroslav Benes
@ 2018-10-11 12:48   ` Petr Mladek
  0 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-10-11 12:48 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Thu 2018-08-30 13:58:15, Miroslav Benes wrote:
> On Tue, 28 Aug 2018, Petr Mladek wrote:
> 
> > livepatch: Atomic replace feature
> > 
> > The atomic replace allows to create cumulative patches. They
> > are useful when you maintain many livepatches and want to remove
> > one that is lower on the stack. In addition it is very useful when
> > more patches touch the same function and there are dependencies
> > between them.
> > 
> > This version does another big refactoring based on feedback against
> > v11[*]. In particular, it removes the registration step, changes
> > the API and handling of livepatch dependencies. The aim is
> > to keep the number of possible variants on a sane level.
> > It helps to keep the feature "easy" to use and maintain.
> > 
> > [*] https://lkml.kernel.org/r/20180323120028.31451-1-pmladek@suse.com
> 
> Hi,
> 
> I've started to review the patch set. Running selftests with lockdep 
> enabled gives me...
> 
> ======================================================
> WARNING: possible circular locking dependency detected
> 4.17.0-rc1-klp_replace_v12-117114-gfedb3eba611d #218 Tainted: G              
> K  
> ------------------------------------------------------
> kworker/1:1/49 is trying to acquire lock:
> 00000000bb88dc17 (kn->count#186){++++}, at: kernfs_remove+0x23/0x40
> 
> but task is already holding lock:
> 0000000073632424 (klp_mutex){+.+.}, at: klp_transition_work_fn+0x17/0x40
> 
> which lock already depends on the new lock.
> 
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #1 (klp_mutex){+.+.}:
>        lock_acquire+0xd4/0x220
>        __mutex_lock+0x75/0x920
>        mutex_lock_nested+0x1b/0x20
>        enabled_store+0x47/0x150
>        kobj_attr_store+0x12/0x20
>        sysfs_kf_write+0x4a/0x60
>        kernfs_fop_write+0x123/0x1b0
>        __vfs_write+0x2b/0x150
>        vfs_write+0xc7/0x1c0
>        ksys_write+0x49/0xa0
>        __x64_sys_write+0x1a/0x20
>        do_syscall_64+0x62/0x1b0
>        entry_SYSCALL_64_after_hwframe+0x49/0xbe
> 
> -> #0 (kn->count#186){++++}:
>        __lock_acquire+0xe9d/0x1240
>        lock_acquire+0xd4/0x220
>        __kernfs_remove+0x23c/0x2c0
>        kernfs_remove+0x23/0x40
>        sysfs_remove_dir+0x51/0x60
>        kobject_del+0x18/0x50
>        kobject_cleanup+0x4b/0x180
>        kobject_put+0x2a/0x50
>        __klp_free_patch+0x5b/0x60
>        klp_free_patch_nowait+0x12/0x30
>        klp_try_complete_transition+0x13e/0x180
>        klp_transition_work_fn+0x26/0x40
>        process_one_work+0x1d8/0x5d0
>        worker_thread+0x4d/0x3d0
>        kthread+0x113/0x150
>        ret_from_fork+0x3a/0x50
> 
> other info that might help us debug this:
> 
>  Possible unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(klp_mutex);
>                                lock(kn->count#186);
>                                lock(klp_mutex);
>   lock(kn->count#186);

Sigh, I overestimated the power of kobjects. I thought that this
must have been a false positive but it was not.

1. kernfs_fop_write() ignores kobj->kref. It takes care only
   of its own reference count, see kernfs_get_active().

2. kobject_put() takes care only of kobj->kref. The following
   code is called when the reference count reaches zero:

   + kobject_put()
     + kref_put()
       + kobject_release()
         + kobject_cleanup()
           + kobject_del()
	     + sysfs_remove_dir()
	       + kernfs_remove()
	         + __kernfs_remove().
		   + kernfs_drain()

    , where kernfs_drain() waits until all opened files
    are closed.

Now, we call kobject_put() under klp_mutex() when the sysfs
interface still exists. Files can be opened for
writing. As a result:

   + enabled_store() might wait for klp_mutex

   + kernfs_drain() would wait for enabled_store()
     with klp_mutex() taken.


I have reproduced this with some extra sleeps.

I am going to work on another solution.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 04/12] livepatch: Consolidate klp_free functions
  2018-08-31 10:39   ` Miroslav Benes
@ 2018-10-12 11:43     ` Petr Mladek
  0 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-10-12 11:43 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Fri 2018-08-31 12:39:23, Miroslav Benes wrote:
> On Tue, 28 Aug 2018, Petr Mladek wrote:
> 
> > The code for freeing livepatch structures is a bit scattered and tricky:
> > 
> >   + direct calls to klp_free_*_limited() and kobject_put() are
> >     used to release partially initialized objects
> > 
> >   + klp_free_patch() removes the patch from the public list
> >     and releases all objects except for patch->kobj
> > 
> >   + object_put(&patch->kobj) and the related wait_for_completion()
> >     are called directly outside klp_mutex; this code is duplicated;
> > 
> > Now, we are going to remove the registration stage to simplify the API
> > and the code. This would require handling more situations in
> > klp_enable_patch() error paths.
> > 
> > More importantly, we are going to add a feature called atomic replace.
> > It will need to dynamically create func and object structures. We will
> > want to reuse the existing init() and free() functions. This would
> > create even more error path scenarios.
> > 
> > This patch implements more clever free functions:
> > 
> >   + checks kobj.state_initialized instead of @limit
> > 
> >   + initializes patch->list early so that the check for empty list
> >     always works
> > 
> >   + The action(s) that have to be done outside klp_mutex are done
> >     in separate klp_free_patch_end() function. It waits only
> >     when patch->kobj was really released via the _begin() part.
> > 
> > Note that it is safe to put patch->kobj under klp_mutex. It calls
> > the release callback only when the reference count reaches zero.
> > Therefore it does not block any related sysfs operation that took
> > a reference and might eventually wait for klp_mutex.
> 
> This seems to be the reason for the issue which lockdep reported. The patch 
> moved kobject_put(&patch->kobj) under klp_mutex. Perhaps I cannot read the 
> kernfs code properly today, but I fail to understand why it is supposed to 
> be safe.

My expectation was that any read/write operation on the related
sysfs interface took a reference on the kobject. Then kobject_put()
would just decrement the reference counter and postpone the real
removal until all other operations were finished.

But it seems that the read/write operations take a reference on
another object (the kernfs_node) and do not block releasing the
kobject via kobject_put().

> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index b3956cce239e..3ca404545150 100644
> > --- a/kernel/livepatch/core.c
> > +++ b/kernel/livepatch/core.c
> > @@ -465,17 +465,15 @@ static struct kobj_type klp_ktype_func = {
> >  	.sysfs_ops = &kobj_sysfs_ops,
> >  };
> >  
> > -/*
> > - * Free all functions' kobjects in the array up to some limit. When limit is
> > - * NULL, all kobjects are freed.
> > - */
> > -static void klp_free_funcs_limited(struct klp_object *obj,
> > -				   struct klp_func *limit)
> > +static void klp_free_funcs(struct klp_object *obj)
> >  {
> >  	struct klp_func *func;
> >  
> > -	for (func = obj->funcs; func->old_name && func != limit; func++)
> > -		kobject_put(&func->kobj);
> > +	klp_for_each_func(obj, func) {
> > +		/* Might be called from klp_init_patch() error path. */
> > +		if (func->kobj.state_initialized)
> > +			kobject_put(&func->kobj);
> > +	}
> >  }
> 
> Just for the record, it is slightly suboptimal because now we iterate 
> through the whole list. We could add a break to the else branch, I think, 
> but it's not necessary.

Interesting optimization. It would keep the effect of the old @limit
and would work at this stage.

But it would stop working once we add the dynamically allocated
structures. They are allocated and initialized in two separate cycles.
We need to free all allocated structures when any initialization
fails.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 07/12] livepatch: Use lists to manage patches, objects and functions
  2018-09-03 16:00   ` Miroslav Benes
@ 2018-10-12 12:12     ` Petr Mladek
  0 siblings, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-10-12 12:12 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Mon 2018-09-03 18:00:45, Miroslav Benes wrote:
> 
> > -#define klp_for_each_object(patch, obj) \
> > +#define klp_for_each_object_static(patch, obj) \
> >  	for (obj = patch->objs; obj->funcs || obj->name; obj++)
> >  
> > -#define klp_for_each_func(obj, func) \
> > +#define klp_for_each_object(patch, obj)	\
> > +	list_for_each_entry(obj, &patch->obj_list, node)
> > +
> > +#define klp_for_each_func_static(obj, func) \
> >  	for (func = obj->funcs; \
> >  	     func->old_name || func->new_addr || func->old_sympos; \
> >  	     func++)
> >  
> > +#define klp_for_each_func(obj, func)	\
> > +	list_for_each_entry(func, &obj->func_list, node)
> > +
> >  int klp_enable_patch(struct klp_patch *);
> >  
> >  void arch_klp_init_object_loaded(struct klp_patch *patch,
> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index 6a47b36a6c9a..7bc23a106b5b 100644
> > --- a/kernel/livepatch/core.c
> > +++ b/kernel/livepatch/core.c
> > @@ -50,6 +50,21 @@ LIST_HEAD(klp_patches);
> >  
> >  static struct kobject *klp_root_kobj;
> >  
> > +static void klp_init_lists(struct klp_patch *patch)
> > +{
> > +	struct klp_object *obj;
> > +	struct klp_func *func;
> > +
> > +	INIT_LIST_HEAD(&patch->obj_list);
> > +	klp_for_each_object_static(patch, obj) {
> > +		list_add(&obj->node, &patch->obj_list);
> > +
> > +		INIT_LIST_HEAD(&obj->func_list);
> > +		klp_for_each_func_static(obj, func)
> > +			list_add(&func->node, &obj->func_list);
> > +	}
> > +}
> > +
> >  static bool klp_is_module(struct klp_object *obj)
> >  {
> >  	return obj->name;
> > @@ -664,6 +679,7 @@ static int klp_init_patch(struct klp_patch *patch)
> >  	patch->module_put = false;
> >  	INIT_LIST_HEAD(&patch->list);
> >  	init_completion(&patch->finish);
> > +	klp_init_lists(patch);
> 
> This could explode easily if patch->objs is NULL. The check is just below.
>   
> >  	if (!patch->objs)
> >  		return -EINVAL;
> 
> Here.
> 
> klp_init_lists() calls klp_for_each_object_static() which accesses 
> obj->name without a check for obj.

Great catch!

> since the check is already there, could you move klp_init_lists()
> call after the check?

The same problem exists with accessing obj->funcs in
klp_for_each_func_static().

I have moved both checks into klp_init_lists(). I made sure that
the lists were initialized to be usable with the free functions
even in case of error.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-09-05  9:34   ` Miroslav Benes
@ 2018-10-12 13:01     ` Petr Mladek
  2018-10-15 16:01       ` Miroslav Benes
  0 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-10-12 13:01 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Wed 2018-09-05 11:34:06, Miroslav Benes wrote:
> On Tue, 28 Aug 2018, Petr Mladek wrote:
> > Also the API and logic is much easier. It is enough to call
> > klp_enable_patch() in module_init() call. The patch can be disabled
> > by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> > can be removed once the transition finishes and sysfs interface is freed.
> 
> I think it would be good to discuss our sysfs interface here as well.
> 
> Writing '1' to enabled attribute now makes sense only when you need to 
> reverse an unpatching transition. Writing '0' means "disable" or a 
> reversion again.
> 
> Wouldn't it be better to split it into two different attributes? Something like 
> "disable" and "reverse"? It could be more intuitive.
> 
> Maybe we'd also find out that even patch->enabled member is not useful 
> anymore in such case.

I thought about this as well. I kept "enabled" because:

  + It keeps the public interface the same as before. Most people
    would not notice any change in the behavior except maybe that
    the interface disappears when the patch gets disabled.

  + The reverse operation makes the most sense when the transition
    cannot get finished. In theory, it might be a problem to
    finish even the reversed one. People might want to
    reverse once again and force it. Then a "reverse" file
    might be confusing. They might not know in which direction
    they do the reverse.


> > @@ -846,17 +740,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
> >  	if (WARN_ON(patch->enabled))
> >  		return -EINVAL;
> >  
> > -	/* enforce stacking: only the first disabled patch can be enabled */
> > -	if (patch->list.prev != &klp_patches &&
> > -	    !list_prev_entry(patch, list)->enabled)
> > -		return -EBUSY;
> > -
> > -	/*
> > -	 * A reference is taken on the patch module to prevent it from being
> > -	 * unloaded.
> > -	 */
> > -	if (!try_module_get(patch->mod))
> > -		return -ENODEV;
> > +	if (!patch->kobj.state_initialized)
> > +		return -EINVAL;
> 
> I think the check is not needed here. __klp_enable_patch() is called right after
> klp_init_patch() in klp_enable_patch().

I would keep it. Someone might want to call this from another
location as well. We even used to do it from enable_store() ;-)

Best Regards,
Petr


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-12 13:01     ` Petr Mladek
@ 2018-10-15 16:01       ` Miroslav Benes
  2018-10-18 14:54         ` Petr Mladek
  0 siblings, 1 reply; 34+ messages in thread
From: Miroslav Benes @ 2018-10-15 16:01 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Fri, 12 Oct 2018, Petr Mladek wrote:

> On Wed 2018-09-05 11:34:06, Miroslav Benes wrote:
> > On Tue, 28 Aug 2018, Petr Mladek wrote:
> > > Also the API and logic is much easier. It is enough to call
> > > klp_enable_patch() in the module_init() call. The patch can be disabled
> > > by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> > > can be removed once the transition finishes and sysfs interface is freed.
> > 
> > I think it would be good to discuss our sysfs interface here as well.
> > 
> > Writing '1' to enabled attribute now makes sense only when you need to 
> > reverse an unpatching transition. Writing '0' means "disable" or a 
> > reversion again.
> > 
> > Wouldn't it be better to split it into two different attributes? Something like 
> > "disable" and "reverse"? It could be more intuitive.
> > 
> > Maybe we'd also find out that even patch->enabled member is not useful 
> > anymore in such case.
> 
> I thought about this as well. I kept "enabled" because:
> 
>   + It keeps the public interface the same as before. Most people
>     would not notice any change in the behavior except maybe that
>     the interface disappears when the patch gets disabled.

Well, our sysfs interface is still in a testing phase as far as the
ABI is concerned. Moreover, each live patch is bound to its base
kernel by definition anyway. So we can change this without remorse,
I think.
 
>   + The reverse operation makes most sense when the transition
>     cannot get finished. In theory, it might be problem to
>     finish even the reversed one. People might want to
>     reverse once again and force it. Then "reverse" file
>     might be confusing. They might not know in which direction
>     they do the reverse.

I still think a less confusing interface would be better, and that
would outweigh the second remark.
 
> > > @@ -846,17 +740,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
> > >  	if (WARN_ON(patch->enabled))
> > >  		return -EINVAL;
> > >  
> > > -	/* enforce stacking: only the first disabled patch can be enabled */
> > > -	if (patch->list.prev != &klp_patches &&
> > > -	    !list_prev_entry(patch, list)->enabled)
> > > -		return -EBUSY;
> > > -
> > > -	/*
> > > -	 * A reference is taken on the patch module to prevent it from being
> > > -	 * unloaded.
> > > -	 */
> > > -	if (!try_module_get(patch->mod))
> > > -		return -ENODEV;
> > > +	if (!patch->kobj.state_initialized)
> > > +		return -EINVAL;
> > 
> > I think the check is not needed here. __klp_enable_patch() is called right after
> > klp_init_patch() in klp_enable_patch().
> 
> I would keep it. Someone might want to call this also from other
> location. Even we used to do it from enable_store() ;-)

Ok, I don't mind in the end.

Miroslav


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-15 16:01       ` Miroslav Benes
@ 2018-10-18 14:54         ` Petr Mladek
  2018-10-18 15:30           ` Josh Poimboeuf
  0 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-10-18 14:54 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Mon 2018-10-15 18:01:43, Miroslav Benes wrote:
> On Fri, 12 Oct 2018, Petr Mladek wrote:
> 
> > On Wed 2018-09-05 11:34:06, Miroslav Benes wrote:
> > > On Tue, 28 Aug 2018, Petr Mladek wrote:
> > > > Also the API and logic is much easier. It is enough to call
> > > > klp_enable_patch() in the module_init() call. The patch can be disabled
> > > > by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> > > > can be removed once the transition finishes and sysfs interface is freed.
> > > 
> > > I think it would be good to discuss our sysfs interface here as well.
> > > 
> > > Writing '1' to enabled attribute now makes sense only when you need to 
> > > reverse an unpatching transition. Writing '0' means "disable" or a 
> > > reversion again.
> > > 
> > > Wouldn't it be better to split it into two different attributes? Something like 
> > > "disable" and "reverse"? It could be more intuitive.
> > > 
> > > Maybe we'd also find out that even patch->enabled member is not useful 
> > > anymore in such case.
> > 
> > I thought about this as well. I kept "enabled" because:
> > 
> >   + It keeps the public interface the same as before. Most people
> >     would not notice any change in the behavior except maybe that
> >     the interface disappears when the patch gets disabled.
> 
> Well our sysfs interface is still in a testing phase as far as ABI is 
> involved. Moreover, each live patch is bound to its base kernel by 
> definition anyway. So we can change this without remorse, I think.
>  
> >   + The reverse operation makes most sense when the transition
> >     cannot get finished. In theory, it might be problem to
> >     finish even the reversed one. People might want to
> >     reverse once again and force it. Then "reverse" file
> >     might be confusing. They might not know in which direction
> >     they do the reverse.
> 
> I still think it would be better to have a less confusing interface and it 
> would outweigh the second remark.

OK, what about having just "disable" in sysfs? I agree that it makes
much more sense than "enable" now.

It might also be used for the reverse operation, the same way
"enable" was used before. I think that a standalone "reverse" might
be confusing when we allow reversing the operation in both
directions.

Best Regards,
Petr


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-18 14:54         ` Petr Mladek
@ 2018-10-18 15:30           ` Josh Poimboeuf
  2018-10-19 12:16             ` Miroslav Benes
  0 siblings, 1 reply; 34+ messages in thread
From: Josh Poimboeuf @ 2018-10-18 15:30 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Miroslav Benes, Jiri Kosina, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Thu, Oct 18, 2018 at 04:54:56PM +0200, Petr Mladek wrote:
> On Mon 2018-10-15 18:01:43, Miroslav Benes wrote:
> > On Fri, 12 Oct 2018, Petr Mladek wrote:
> > 
> > > On Wed 2018-09-05 11:34:06, Miroslav Benes wrote:
> > > > On Tue, 28 Aug 2018, Petr Mladek wrote:
> > > > > Also the API and logic is much easier. It is enough to call
> > > > > klp_enable_patch() in the module_init() call. The patch can be disabled
> > > > > by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> > > > > can be removed once the transition finishes and sysfs interface is freed.
> > > > 
> > > > I think it would be good to discuss our sysfs interface here as well.
> > > > 
> > > > Writing '1' to enabled attribute now makes sense only when you need to 
> > > > reverse an unpatching transition. Writing '0' means "disable" or a 
> > > > reversion again.
> > > > 
> > > > Wouldn't it be better to split it into two different attributes? Something like 
> > > > "disable" and "reverse"? It could be more intuitive.
> > > > 
> > > > Maybe we'd also find out that even patch->enabled member is not useful 
> > > > anymore in such case.
> > > 
> > > I thought about this as well. I kept "enabled" because:
> > > 
> > >   + It keeps the public interface the same as before. Most people
> > >     would not notice any change in the behavior except maybe that
> > >     the interface disappears when the patch gets disabled.
> > 
> > Well our sysfs interface is still in a testing phase as far as ABI is 
> > involved. Moreover, each live patch is bound to its base kernel by 
> > definition anyway. So we can change this without remorse, I think.

But it would break tooling, which is not kernel specific.  I'm not sure
whether it would be worth the headache.  After all I think the livepatch
sysfs interface is designed for tools, not humans.

> > >   + The reverse operation makes most sense when the transition
> > >     cannot get finished. In theory, it might be problem to
> > >     finish even the reversed one. People might want to
> > >     reverse once again and force it. Then "reverse" file
> > >     might be confusing. They might not know in which direction
> > >     they do the reverse.
> > 
> > I still think it would be better to have a less confusing interface and it 
> > would outweigh the second remark.
> 
> OK, what about having just "disable" in sysfs. I agree that it makes
> much more sense than "enable" now.
> 
> It might be used also for the reverse operation the same way as
> "enable" was used before. I think that standalone "reverse" might
> be confusing when we allow to reverse the operation in both
> directions.

As long as we're talking about radical changes... how about we just
don't allow disabling patches at all?  Instead a patch can be replaced
with a 'revert' patch, or an empty 'nop' patch.  That would make our
code simpler and also ensure there's an audit trail.

(Apologies if we've already talked about this.  My brain is still mushy
thanks to Spectre and friends.)

The amount of flexibility we allow is kind of crazy, considering how
delicate of an operation live patching is.  That reminds me that I
should bring up my other favorite idea at LPC: require modules to be
loaded before we "patch" them.

-- 
Josh


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-18 15:30           ` Josh Poimboeuf
@ 2018-10-19 12:16             ` Miroslav Benes
  2018-10-19 14:36               ` Josh Poimboeuf
  0 siblings, 1 reply; 34+ messages in thread
From: Miroslav Benes @ 2018-10-19 12:16 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Petr Mladek, Jiri Kosina, Jason Baron, Joe Lawrence, Jessica Yu,
	Evgenii Shatokhin, live-patching, linux-kernel

On Thu, 18 Oct 2018, Josh Poimboeuf wrote:

> On Thu, Oct 18, 2018 at 04:54:56PM +0200, Petr Mladek wrote:
> > On Mon 2018-10-15 18:01:43, Miroslav Benes wrote:
> > > On Fri, 12 Oct 2018, Petr Mladek wrote:
> > > 
> > > > On Wed 2018-09-05 11:34:06, Miroslav Benes wrote:
> > > > > On Tue, 28 Aug 2018, Petr Mladek wrote:
> > > > > > Also the API and logic is much easier. It is enough to call
> > > > > > klp_enable_patch() in the module_init() call. The patch can be disabled
> > > > > > by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> > > > > > can be removed once the transition finishes and sysfs interface is freed.
> > > > > 
> > > > > I think it would be good to discuss our sysfs interface here as well.
> > > > > 
> > > > > Writing '1' to enabled attribute now makes sense only when you need to 
> > > > > reverse an unpatching transition. Writing '0' means "disable" or a 
> > > > > reversion again.
> > > > > 
> > > > > Wouldn't it be better to split it into two different attributes? Something like 
> > > > > "disable" and "reverse"? It could be more intuitive.
> > > > > 
> > > > > Maybe we'd also find out that even patch->enabled member is not useful 
> > > > > anymore in such case.
> > > > 
> > > > I thought about this as well. I kept "enabled" because:
> > > > 
> > > >   + It keeps the public interface the same as before. Most people
> > > >     would not notice any change in the behavior except maybe that
> > > >     the interface disappears when the patch gets disabled.
> > > 
> > > Well our sysfs interface is still in a testing phase as far as ABI is 
> > > involved. Moreover, each live patch is bound to its base kernel by 
> > > definition anyway. So we can change this without remorse, I think.
> 
> But it would break tooling, which is not kernel specific.  I'm not sure
> whether it would be worth the headache.  After all I think the livepatch
> sysfs interface is designed for tools, not humans.

You're right. It's probably not worth it. Oh well.
 
> > > >   + The reverse operation makes most sense when the transition
> > > >     cannot get finished. In theory, it might be problem to
> > > >     finish even the reversed one. People might want to
> > > >     reverse once again and force it. Then "reverse" file
> > > >     might be confusing. They might not know in which direction
> > > >     they do the reverse.
> > > 
> > > I still think it would be better to have a less confusing interface and it 
> > > would outweigh the second remark.
> > 
> > OK, what about having just "disable" in sysfs. I agree that it makes
> > much more sense than "enable" now.
> > 
> > It might be used also for the reverse operation the same way as
> > "enable" was used before. I think that standalone "reverse" might
> > be confusing when we allow to reverse the operation in both
> > directions.
> 
> As long as we're talking about radical changes... how about we just
> don't allow disabling patches at all?  Instead a patch can be replaced
> with a 'revert' patch, or an empty 'nop' patch.  That would make our
> code simpler and also ensure there's an audit trail.
> 
> (Apologies if we've already talked about this.  My brain is still mushy
> thanks to Spectre and friends.)

I think we talked about it last year in Prague and I think we convinced 
you that it was not a good idea (...not to allow disabling patches at 
all).

BUT! An empty 'nop' patch is a new idea and we may certainly discuss it.

> The amount of flexibility we allow is kind of crazy, considering how
> delicate of an operation live patching is.  That reminds me that I
> should bring up my other favorite idea at LPC: require modules to be
> loaded before we "patch" them.

We talked about this as well, and if I remember correctly we came to
the conclusion that it is all about distribution and maintenance. We
cannot ask customers to load modules they do not need just because we
need to patch them. One cumulative patch is not that great in this
case. I remember you had a crazy idea for how to solve it, but I
don't remember the details. My notes from the event say...

	- livepatch code complexity
		- make it synchronous with respect to modules loading
		- Josh's crazy idea

That's not much :D

So yes, we can talk about it and hopefully make proper notes this time.

Miroslav


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-19 12:16             ` Miroslav Benes
@ 2018-10-19 14:36               ` Josh Poimboeuf
  2018-10-22 13:25                 ` Petr Mladek
  0 siblings, 1 reply; 34+ messages in thread
From: Josh Poimboeuf @ 2018-10-19 14:36 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Petr Mladek, Jiri Kosina, Jason Baron, Joe Lawrence, Jessica Yu,
	Evgenii Shatokhin, live-patching, linux-kernel

On Fri, Oct 19, 2018 at 02:16:19PM +0200, Miroslav Benes wrote:
> On Thu, 18 Oct 2018, Josh Poimboeuf wrote:
> 
> > On Thu, Oct 18, 2018 at 04:54:56PM +0200, Petr Mladek wrote:
> > > On Mon 2018-10-15 18:01:43, Miroslav Benes wrote:
> > > > On Fri, 12 Oct 2018, Petr Mladek wrote:
> > > > 
> > > > > On Wed 2018-09-05 11:34:06, Miroslav Benes wrote:
> > > > > > On Tue, 28 Aug 2018, Petr Mladek wrote:
> > > > > > > Also the API and logic is much easier. It is enough to call
> > > > > > > klp_enable_patch() in the module_init() call. The patch can be disabled
> > > > > > > by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> > > > > > > can be removed once the transition finishes and sysfs interface is freed.
> > > > > > 
> > > > > > I think it would be good to discuss our sysfs interface here as well.
> > > > > > 
> > > > > > Writing '1' to enabled attribute now makes sense only when you need to 
> > > > > > reverse an unpatching transition. Writing '0' means "disable" or a 
> > > > > > reversion again.
> > > > > > 
> > > > > > Wouldn't it be better to split it into two different attributes? Something like 
> > > > > > "disable" and "reverse"? It could be more intuitive.
> > > > > > 
> > > > > > Maybe we'd also find out that even patch->enabled member is not useful 
> > > > > > anymore in such case.
> > > > > 
> > > > > I thought about this as well. I kept "enabled" because:
> > > > > 
> > > > >   + It keeps the public interface the same as before. Most people
> > > > >     would not notice any change in the behavior except maybe that
> > > > >     the interface disappears when the patch gets disabled.
> > > > 
> > > > Well our sysfs interface is still in a testing phase as far as ABI is 
> > > > involved. Moreover, each live patch is bound to its base kernel by 
> > > > definition anyway. So we can change this without remorse, I think.
> > 
> > But it would break tooling, which is not kernel specific.  I'm not sure
> > whether it would be worth the headache.  After all I think the livepatch
> > sysfs interface is designed for tools, not humans.
> 
> You're right. It's probably not worth it. Oh well.
>  
> > > > >   + The reverse operation makes most sense when the transition
> > > > >     cannot get finished. In theory, it might be problem to
> > > > >     finish even the reversed one. People might want to
> > > > >     reverse once again and force it. Then "reverse" file
> > > > >     might be confusing. They might not know in which direction
> > > > >     they do the reverse.
> > > > 
> > > > I still think it would be better to have a less confusing interface and it 
> > > > would outweigh the second remark.
> > > 
> > > OK, what about having just "disable" in sysfs. I agree that it makes
> > > much more sense than "enable" now.
> > > 
> > > It might be used also for the reverse operation the same way as
> > > "enable" was used before. I think that standalone "reverse" might
> > > be confusing when we allow to reverse the operation in both
> > > directions.
> > 
> > As long as we're talking about radical changes... how about we just
> > don't allow disabling patches at all?  Instead a patch can be replaced
> > with a 'revert' patch, or an empty 'nop' patch.  That would make our
> > code simpler and also ensure there's an audit trail.
> > 
> > (Apologies if we've already talked about this.  My brain is still mushy
> > thanks to Spectre and friends.)
> 
> I think we talked about it last year in Prague and I think we convinced 
> you that it was not a good idea (...not to allow disabling patches at 
> all).
> 
> BUT! Empty 'nop' patch is a new idea and we may certainly discuss it.

I definitely remember talking about it in Prague, but I don't remember
any conclusions.  My livepatch-related brain cache lines have been
flushed thanks to the aforementioned CVEs and my rapidly advancing
senility.

> > The amount of flexibility we allow is kind of crazy, considering how
> > delicate of an operation live patching is.  That reminds me that I
> > should bring up my other favorite idea at LPC: require modules to be
> > loaded before we "patch" them.
> 
> We talked about this as well and if I remember correctly we came to a 
> conclusion that it is all about a distribution and maintenance. We cannot 
> ask customers to load modules they do not need just because we need to 
> patch them.

Fair enough.

> One cumulative patch is not that great in this case. I remember you
> had a crazy idea how to solve it, but I don't remember details. My
> notes from the event say...
> 
> 	- livepatch code complexity
> 		- make it synchronous with respect to modules loading
> 		- Josh's crazy idea
> 
> That's not much :D
> 
> So yes, we can talk about it and hopefully make proper notes this time.

Heh, better notes would be good, otherwise I'll just keep complaining
about the same things every year :-)  I'll try to remember what my crazy
idea was, or maybe come up with some new ones to keep it fresh.

-- 
Josh


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-19 14:36               ` Josh Poimboeuf
@ 2018-10-22 13:25                 ` Petr Mladek
  2018-10-23 16:39                   ` Josh Poimboeuf
  0 siblings, 1 reply; 34+ messages in thread
From: Petr Mladek @ 2018-10-22 13:25 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Miroslav Benes, Jiri Kosina, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Fri 2018-10-19 09:36:04, Josh Poimboeuf wrote:
> On Fri, Oct 19, 2018 at 02:16:19PM +0200, Miroslav Benes wrote:
> > On Thu, 18 Oct 2018, Josh Poimboeuf wrote:
> > 
> > > On Thu, Oct 18, 2018 at 04:54:56PM +0200, Petr Mladek wrote:
> > > > OK, what about having just "disable" in sysfs. I agree that it makes
> > > > much more sense than "enable" now.
> > > > 
> > > > It might be used also for the reverse operation the same way as
> > > > "enable" was used before. I think that standalone "reverse" might
> > > > be confusing when we allow to reverse the operation in both
> > > > directions.
> > > 
> > > As long as we're talking about radical changes... how about we just
> > > don't allow disabling patches at all?  Instead a patch can be replaced
> > > with a 'revert' patch, or an empty 'nop' patch.  That would make our
> > > code simpler and also ensure there's an audit trail.
> > > 
> > > (Apologies if we've already talked about this.  My brain is still mushy
> > > thanks to Spectre and friends.)
> > 
> > I think we talked about it last year in Prague and I think we convinced 
> > you that it was not a good idea (...not to allow disabling patches at 
> > all).
> > 
> > BUT! Empty 'nop' patch is a new idea and we may certainly discuss it.
> 
> I definitely remember talking about it in Prague, but I don't remember
> any conclusions.

The revert operation allows removing a livepatch stuck in the
transition without forcing.

Also, implementing an empty cumulative patch might be tricky because
of the callbacks. The current proposal is to call callbacks only
from the new livepatch. It helps to keep the interactions simple
and under control. How to take over a change between an old and
a new patch depends on the particular functionality.

It would mean that the empty patch might need to be custom-made.
Users probably would need to ask and wait for it.


> My livepatch-related brain cache lines have been
> flushed thanks to the aforementioned CVEs and my rapidly advancing
> senility.

Uff, I am not the only one.


> > > The amount of flexibility we allow is kind of crazy, considering how
> > > delicate of an operation live patching is.  That reminds me that I
> > > should bring up my other favorite idea at LPC: require modules to be
> > > loaded before we "patch" them.
> > 
> > We talked about this as well and if I remember correctly we came to a 
> > conclusion that it is all about a distribution and maintenance. We cannot 
> > ask customers to load modules they do not need just because we need to 
> > patch them.
> 
> Fair enough.
> 
> > One cumulative patch is not that great in this case. I remember you
> > had a crazy idea how to solve it, but I don't remember details. My
> > notes from the event say...
> > 
> > 	- livepatch code complexity
> > 		- make it synchronous with respect to modules loading
> > 		- Josh's crazy idea
> > 
> > That's not much :D
> > 
> > So yes, we can talk about it and hopefully make proper notes this time.
> 
> Heh, better notes would be good, otherwise I'll just keep complaining
> about the same things every year :-)  I'll try to remember what my crazy
> idea was, or maybe come up with some new ones to keep it fresh.

If we do not want to force users to load all patched modules then
we would need to create a livepatch-per-module. This just moves
the complexity somewhere else.

One big problem would be how to keep the system consistent. You
would need to solve races between loading modules and livepatches
anyway.

For example, you could not load fixed/patched modules when the system
is not fully patched yet. You would need to load the module and
the related livepatch at the same time and follow the consistency
model as we do now.

OK, there was the idea to refuse loading modules when a livepatch
transition is in progress. But it might not be acceptable,
especially when the transition gets blocked indefinitely
and manual intervention is needed.

I agree that the current solution adds complexity to
the livepatching code, but it is not that complicated.
Races between loading modules and livepatches in parallel
are solved by the mod->klp_active flag. There are no other
races because all other operations are done on code
that is not actively used. One good thing is that
everything is in one place and the kernel has it under
control.

I am open to discuss it. But we would need to come up with
some clever solution.

Best Regards,
Petr


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-22 13:25                 ` Petr Mladek
@ 2018-10-23 16:39                   ` Josh Poimboeuf
  2018-10-24  2:55                     ` Josh Poimboeuf
  2018-10-24 11:14                     ` Petr Mladek
  0 siblings, 2 replies; 34+ messages in thread
From: Josh Poimboeuf @ 2018-10-23 16:39 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Miroslav Benes, Jiri Kosina, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Mon, Oct 22, 2018 at 03:25:10PM +0200, Petr Mladek wrote:
> On Fri 2018-10-19 09:36:04, Josh Poimboeuf wrote:
> > On Fri, Oct 19, 2018 at 02:16:19PM +0200, Miroslav Benes wrote:
> > > On Thu, 18 Oct 2018, Josh Poimboeuf wrote:
> > > 
> > > > On Thu, Oct 18, 2018 at 04:54:56PM +0200, Petr Mladek wrote:
> > > > > OK, what about having just "disable" in sysfs. I agree that it makes
> > > > > much more sense than "enable" now.
> > > > > 
> > > > > It might be used also for the reverse operation the same way as
> > > > > "enable" was used before. I think that standalone "reverse" might
> > > > > be confusing when we allow to reverse the operation in both
> > > > > directions.
> > > > 
> > > > As long as we're talking about radical changes... how about we just
> > > > don't allow disabling patches at all?  Instead a patch can be replaced
> > > > with a 'revert' patch, or an empty 'nop' patch.  That would make our
> > > > code simpler and also ensure there's an audit trail.
> > > > 
> > > > (Apologies if we've already talked about this.  My brain is still mushy
> > > > thanks to Spectre and friends.)
> > > 
> > > I think we talked about it last year in Prague and I think we convinced 
> > > you that it was not a good idea (...not to allow disabling patches at 
> > > all).
> > > 
> > > BUT! Empty 'nop' patch is a new idea and we may certainly discuss it.
> > 
> > I definitely remember talking about it in Prague, but I don't remember
> > any conclusions.
> 
> The revert operation allows to remove a livepatch stuck in the
> transition without forcing.

True, though I question the real-world value of that.

> Also implementing empty cumulative patch might be tricky because
> of the callbacks. The current proposal is to call callbacks only
> from the new livepatch. It helps to keep the interactions easy
> and under control. The way how to take over some change between
> an old and new patch depends on the particular functionality.

Presumably a 'no-op' patch would be special, in that it would call the
un-patch callbacks.

I think the only *real* benefit of this proposal would be that history
would be a straight line, with no backtracking.  Similar to git rebase
vs merge.  You'd be able to tell what has been applied and reverted just
by looking at what modules are loaded.

But, I now realize that in order for that to be the case, we'd have to
disallow the unloading of replaced modules.  But I think we're going to
end up *allowing* the unloading of replaced modules, right?

So maybe it's not worth it.  I'll drop it.

> It would mean that the empty patch might need to be custom.
> Users probably would need to ask and wait for it.
> 
> 
> > My livepatch-related brain cache lines have been
> > flushed thanks to the aforementioned CVEs and my rapidly advancing
> > senility.
> 
> Uff, I am not the only one.

:-)

> > > > The amount of flexibility we allow is kind of crazy, considering how
> > > > delicate of an operation live patching is.  That reminds me that I
> > > > should bring up my other favorite idea at LPC: require modules to be
> > > > loaded before we "patch" them.
> > > 
> > > We talked about this as well and if I remember correctly we came to a 
> > > conclusion that it is all about a distribution and maintenance. We cannot 
> > > ask customers to load modules they do not need just because we need to 
> > > patch them.
> > 
> > Fair enough.
> > 
> > > One cumulative patch is not that great in this case. I remember you
> > > had a crazy idea how to solve it, but I don't remember details. My
> > > notes from the event say...
> > > 
> > > 	- livepatch code complexity
> > > 		- make it synchronous with respect to modules loading
> > > 		- Josh's crazy idea
> > > 
> > > That's not much :D
> > > 
> > > So yes, we can talk about it and hopefully make proper notes this time.
> > 
> > Heh, better notes would be good, otherwise I'll just keep complaining
> > about the same things every year :-)  I'll try to remember what my crazy
> > idea was, or maybe come up with some new ones to keep it fresh.
> 
> If we do not want to force users to load all patched modules then
> we would need to create a livepatch-per-module. This just moves
> the complexity somewhere else.
> 
> One big problem would be how to keep the system consistent. You
> would need to solve races between loading modules and livepatches
> anyway.
> 
> For example, you could not load fixed/patched modules when the system
> is not fully patched yet. You would need to load the module and
> the related livepatch at the same time and follow the consistency
> model as we do now.
> 
> OK, there was the idea to refuse loading modules when livepatch
> transition is in progress. But it might not be acceptable,
> especially when the transition gets blocked infinitely
> and manual intervention would be needed.
> 
> I agree that the current solution adds complexity to
> the livepatching code, but it is not that complicated.
> Races between loading modules and livepatches in parallel
> are solved by the mod->klp_active flag. There are no other
> races because all other operations are done on code
> that is not actively used. One good thing is that
> everything is in one place and the kernel has it under
> control.
> 
> I am open to discuss it. But we would need to come up with
> some clever solution.

Yeah, I think that's pretty much the crazy idea Miroslav mentioned.  The
patch would consist of several modules.  The parent module would
register the patch and patch vmlinux.  Each child module would be
associated with a to-be-patched module.  The child modules could be
loaded on demand, either by special klp code or by modprobe.

As you described, there would be some races to think about.  However, it
would also have some benefits.

I *hope* it would mean we could get rid of a lot of our ugly hacks, like

- klp symbols, klp relas
- preserving ELF data, PLT's, other horrible arch-specific things
- klp.arch..altinstructions, klp.arch..parainstructions
- manually calling apply_relocate_add()

However... we might still need some of those things for another reason:
to bypass exported symbol protections.  It needs some more
investigation.

Given this discussion, I'm thinking there wouldn't be much to discuss at
LPC for this topic unless we had a prototype to look at (which I won't
have time to do).  So I may drop my talk in favor of giving more time
for other more tangible discussions.

-- 
Josh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-23 16:39                   ` Josh Poimboeuf
@ 2018-10-24  2:55                     ` Josh Poimboeuf
  2018-10-24 11:14                     ` Petr Mladek
  1 sibling, 0 replies; 34+ messages in thread
From: Josh Poimboeuf @ 2018-10-24  2:55 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Miroslav Benes, Jiri Kosina, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue, Oct 23, 2018 at 11:39:43AM -0500, Josh Poimboeuf wrote:
> Given this discussion, I'm thinking there wouldn't be much to discuss at
> LPC for this topic unless we had a prototype to look at (which I won't
> have time to do).  So I may drop my talk in favor of giving more time
> for other more tangible discussions.

I dropped my talk to give us time for the other topics, and I somewhat
arbitrarily added 5 minutes to each of the first 3 talks.  Suggestions
welcome.

-- 
Josh


* Re: [PATCH v12 06/12] livepatch: Simplify API by removing registration step
  2018-10-23 16:39                   ` Josh Poimboeuf
  2018-10-24  2:55                     ` Josh Poimboeuf
@ 2018-10-24 11:14                     ` Petr Mladek
  1 sibling, 0 replies; 34+ messages in thread
From: Petr Mladek @ 2018-10-24 11:14 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Miroslav Benes, Jiri Kosina, Jason Baron, Joe Lawrence,
	Jessica Yu, Evgenii Shatokhin, live-patching, linux-kernel

On Tue 2018-10-23 11:39:43, Josh Poimboeuf wrote:
> On Mon, Oct 22, 2018 at 03:25:10PM +0200, Petr Mladek wrote:
> > On Fri 2018-10-19 09:36:04, Josh Poimboeuf wrote:
> > > On Fri, Oct 19, 2018 at 02:16:19PM +0200, Miroslav Benes wrote:
> > > > On Thu, 18 Oct 2018, Josh Poimboeuf wrote:
> > > > > As long as we're talking about radical changes... how about we just
> > > > > don't allow disabling patches at all?  Instead a patch can be replaced
> > > > > with a 'revert' patch, or an empty 'nop' patch.  That would make our
> > > > > code simpler and also ensure there's an audit trail.
> > > > > 
> > The revert operation allows removing a livepatch stuck in the
> > transition without forcing.
> 
> True, though I question the real world value of that.

We ended up in this situation a few times with kGraft when a kthread
was not annotated and did not migrate. We have not seen this with the
upstream livepatch code yet, but we shipped the first product with it
only a few months ago.

I would say that it is nice to have, but it is not a must have.


> > One big problem would be how to keep the system consistent. You
> > would need to solve races between loading modules and livepatches
> > anyway.
> > 
> > For example, you could not load fixed/patched modules when the system
> > is not fully patched yet. You would need to load the module and
> > the related livepatch at the same time and follow the consistency
> > model as we do now.
> 
> Yeah, I think that's pretty much the crazy idea Miroslav mentioned.  The
> patch would consist of several modules.  The parent module would
> register the patch and patch vmlinux.  Each child module would be
> associated with a to-be-patched module.  The child modules could be
> loaded on demand, either by special klp code or by modprobe.

Yup, something like this.

> As you described, there would be some races to think about.  However, it
> would also have some benefits.
> 
> I *hope* it would mean we could get rid of a lot of our ugly hacks, like
> 
> - klp symbols, klp relas

Access to external static symbols would still need some klp-specific
relocations.
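For reference, the klp-specific relocation scheme encodes the target
object and symbol position in the symbol name, roughly as described in
the kernel's livepatch module ELF format documentation. The concrete
symbols below are made-up examples:

```
.klp.sym.vmlinux.snprintf,0       symbol lives in vmlinux; sympos 0 = unique
.klp.sym.ext4.ext4_attr_show,1    first occurrence of a static symbol in ext4
.klp.rela.ext4.text.unlikely      rela section applied when ext4 loads
```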

> - preserving ELF data, PLT's, other horrible arch-specific things
> - klp.arch..altinstructions, klp.arch..parainstructions
> - manually calling apply_relocate_add()

Yup, these might be candidates to go.


> However... we might still need some of those things for another reason:
> to bypass exported symbol protections.  It needs some more
> investigation.
> 
> Given this discussion, I'm thinking there wouldn't be much to discuss at
> LPC for this topic unless we had a prototype to look at (which I won't
> have time to do).  So I may drop my talk in favor of giving more time
> for other more tangible discussions.

Sounds reasonable. At least I would not be able to say much more about
it without seeing a more detailed proposal and ideally prototype
code. That said, I definitely do not want to discourage you from
playing with the idea.

Best Regards,
Petr


end of thread, other threads:[~2018-10-24 11:14 UTC | newest]

Thread overview: 34+ messages
2018-08-28 14:35 [PATCH v12 00/12] Petr Mladek
2018-08-28 14:35 ` [PATCH v12 01/12] livepatch: Change void *new_func -> unsigned long new_addr in struct klp_func Petr Mladek
2018-08-31  8:37   ` Miroslav Benes
2018-08-28 14:35 ` [PATCH v12 02/12] livepatch: Helper macros to define livepatch structures Petr Mladek
2018-08-28 14:35 ` [PATCH v12 03/12] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code Petr Mladek
2018-08-31  8:38   ` Miroslav Benes
2018-08-28 14:35 ` [PATCH v12 04/12] livepatch: Consolidate klp_free functions Petr Mladek
2018-08-31 10:39   ` Miroslav Benes
2018-10-12 11:43     ` Petr Mladek
2018-08-28 14:35 ` [PATCH v12 05/12] livepatch: Refuse to unload only livepatches available during a forced transition Petr Mladek
2018-08-28 14:35 ` [PATCH v12 06/12] livepatch: Simplify API by removing registration step Petr Mladek
2018-09-05  9:34   ` Miroslav Benes
2018-10-12 13:01     ` Petr Mladek
2018-10-15 16:01       ` Miroslav Benes
2018-10-18 14:54         ` Petr Mladek
2018-10-18 15:30           ` Josh Poimboeuf
2018-10-19 12:16             ` Miroslav Benes
2018-10-19 14:36               ` Josh Poimboeuf
2018-10-22 13:25                 ` Petr Mladek
2018-10-23 16:39                   ` Josh Poimboeuf
2018-10-24  2:55                     ` Josh Poimboeuf
2018-10-24 11:14                     ` Petr Mladek
2018-08-28 14:35 ` [PATCH v12 07/12] livepatch: Use lists to manage patches, objects and functions Petr Mladek
2018-09-03 16:00   ` Miroslav Benes
2018-10-12 12:12     ` Petr Mladek
2018-08-28 14:35 ` [PATCH v12 08/12] livepatch: Add atomic replace Petr Mladek
2018-08-28 14:36 ` [PATCH v12 09/12] livepatch: Remove Nop structures when unused Petr Mladek
2018-09-04 14:50   ` Miroslav Benes
2018-08-28 14:36 ` [PATCH v12 10/12] livepatch: Atomic replace and cumulative patches documentation Petr Mladek
2018-09-04 15:15   ` Miroslav Benes
2018-08-28 14:36 ` [PATCH v12 11/12] livepatch: Remove ordering and refuse loading conflicting patches Petr Mladek
2018-08-28 14:36 ` [PATCH v12 12/12] selftests/livepatch: introduce tests Petr Mladek
2018-08-30 11:58 ` [PATCH v12 00/12] Miroslav Benes
2018-10-11 12:48   ` Petr Mladek
