linux-kernel.vger.kernel.org archive mirror
* [PATCH v15 00/11] livepatch: Atomic replace feature
@ 2019-01-09 12:43 Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 01/11] livepatch: Change unsigned long old_addr -> void *old_func in struct klp_func Petr Mladek
                   ` (12 more replies)
  0 siblings, 13 replies; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

Hi,

The atomic replace feature allows creating cumulative patches. They
are useful when you maintain many livepatches and want to remove
one that is lower on the stack. It is also very useful when
several patches touch the same function and there are dependencies
between them.

I kindly ask reviewers to concentrate on bugs and on really confusing
names, comments, or commit messages. Even simple changes are not
always simple to rebase, and re-reviewing them is not trivial either.

Feel free to report any problems and ideas. But I suggest
doing any non-critical changes on top of this patchset
when possible.

This is also why I did not add the missing kzalloc error handling
and did not fix the non-static warnings in the selftests. They
are not critical. And the discussion about the fixes
in the samples directory was already quite long.

PS: I am sorry that I lost my nerve during the last review.
It was the perfect time to report cosmetic problems; I just
was a bit overloaded and tired. It was also a bit sad to
realize how bad my English was. I am going to hide under
some stone now.


Changes against v14:

  + Rename klp_active -> klp_added [Josh]

  + Do not clear klp_added variable in klp_free*() [Josh]

  + Rename klp_init_patch_before_free() ->  klp_init_patch_early()
    [Josh]

  + Do not call klp_free_patch() when
    kobject_init_and_add(&patch->kobj) fails in 3rd patch.
    The final code stays the same after the refactoring
    in 5th patch. [Josh]

  + Better 4th patch subject [Josh]

  + Use safe iterators in klp_free() functions already in 7th
    patch instead of in 8th [Josh]

  + Rename free_all -> nops_only and revert the logic in
    klp_free*() and klp_unpatch_object*() functions [Josh]

  + Do not refuse loading conflicting patches. Just remove
    the ordering (stacking) [Josh]

  + Fixed typos, improved wording [all]


Changes against v13:

  + Rename old_addr -> old_func instead of new_func -> new_addr. [Josh]

  + Do not add the helper macros to define structures. [Miroslav, Josh]

  + Add custom kobj_alive flag to reliably handle kobj state. [Miroslav]

  + Avoid renaming .forced flag to .module_put by calling klp_free
    functions only with taken module reference. [Josh]

  + Use list_add_tail() instead of list_add() when updating the dynamic
    lists of klp_object and klp_func structures. Note that this
    required also updating the order of messages from the pre/post
    callbacks in the selftest. [Josh, Miroslav]

  + Do not unnecessarily initialize ret variable in klp_add_nops(). [Miroslav]

  + Got rid of klp_discard_replaced_stuff(). [Josh]

  + Updated commit messages, comments and documentation, especially
    the section "Livepatch life-cycle" [Josh, Miroslav]


Changes against v12:

  + Finish freeing the patch using workqueues to prevent
    deadlock against kobject code.

  + Check for valid pointers when initializing the dynamic
    lists objects and functions.

  + Mark klp_free_objects_dynamic() static.

  + Improved documentation and fixed typos


Changes against v11:

  + Functional changes:

    + Livepatches get automatically unregistered when disabled.
      Note that the sysfs interface disappears at this point.
      It simplifies the API and code. The only drawback is that
      the patch can be enabled again only by reloading the module.

    + Refuse to load conflicting patches. The same function can
      be patched again only by a new cumulative patch that
      replaces all older ones.

    + Non-conflicting patches can be loaded and disabled in any
      order.
      

  + API related changes:

     + Change void *new_func -> unsigned long new_addr in
       struct klp_func.

     + Several new macros to hide implementation details and
       avoid casting when defining struct klp_func and klp_object.

     + Remove the obsolete klp_register_patch()/klp_unregister_patch() API


  + Change in selftest against v4:

     + Use new macros to define struct klp_func and klp_object.

     + Remove klp_register_patch()/klp_unregister_patch() calls.

     + Replace load_mod() + wait_for_transition() with three
       variants: load_mod(), load_lp(), load_lp_nowait(). IMHO,
       it is easier to use because we now need to detect the end
       of the transition in another way after disable_lp().

     + Replace unload_mod() with two variants, unload_mod() and
       unload_lp(), to match the above change.

     + Wait for the end of transition in disable_lp()
       instead of the unreliable check of the sysfs interface.

     Note that I did not touch the logs with expected result.
     They stay exactly the same as in v4 posted by Joe.
     I hope that it is a good sign ;-)


Changes against v10:

  + Bug fixes and functional changes:
    + Handle Nops in klp_ftrace_handler() to avoid an infinite loop [Mirek]
    + Really add dynamically allocated klp_object into the list [Petr]
    + Clear patch->replace when transition finishes [Josh]

  + Refactoring and clean up [Josh]:
    + Replace enum types with bools
    + Avoid using ERR_PTR
    + Remove too paranoid warnings
    + Distinguish registered patches by a flag instead of a list
    + Squash some functions
    + Update comments, documentation, and commit messages
    + Squashed and split patches to do more controversial changes later

Changes against v9:

  + Fixed check of valid NOPs for already loaded objects,
    regression introduced in v9 [Joe, Mirek]
  + Allow replacing even disabled patches [Evgenii]

Changes against v8:

  + Fixed handling of statically defined struct klp_object
    with empty array of functions [Joe, Mirek]
  + Removed redundant func->new_func assignment for NOPs [Mirek]
  + Improved some wording [Mirek]

Changes against v7:

  + Fixed handling of NOPs for not-yet-loaded modules
  + Made klp_replaced_patches list static [Mirek]
  + Made klp_free_object() public later [Mirek]
  + Fixed several reported typos [Mirek, Joe]
  + Updated documentation according to the feedback [Joe]
  + Added some Acks [Mirek]

Changes against v6:

  + used list_move when disabling replaced patches [Jason]
  + renamed KLP_FUNC_ORIGINAL -> KLP_FUNC_STATIC [Mirek]
  + used klp_is_func_type() in klp_unpatch_object() [Mirek]
  + moved static definition of klp_get_or_add_object() [Mirek]
  + updated comment about synchronization in forced mode [Mirek]
  + added user documentation
  + fixed several typos


Jason Baron (2):
  livepatch: Use lists to manage patches, objects and functions
  livepatch: Add atomic replace

Joe Lawrence (1):
  selftests/livepatch: introduce tests

Petr Mladek (8):
  livepatch: Change unsigned long old_addr -> void *old_func in struct
    klp_func
  livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code
  livepatch: Consolidate klp_free functions
  livepatch: Don't block the removal of patches loaded after a forced
    transition
  livepatch: Simplify API by removing registration step
  livepatch: Remove Nop structures when unused
  livepatch: Atomic replace and cumulative patches documentation
  livepatch: Remove ordering (stacking) of the livepatches

 Documentation/livepatch/callbacks.txt              | 489 +------------
 Documentation/livepatch/cumulative-patches.txt     | 102 +++
 Documentation/livepatch/livepatch.txt              | 167 +++--
 MAINTAINERS                                        |   1 +
 include/linux/livepatch.h                          |  50 +-
 kernel/livepatch/core.c                            | 790 +++++++++++++--------
 kernel/livepatch/core.h                            |   5 +
 kernel/livepatch/patch.c                           |  57 +-
 kernel/livepatch/patch.h                           |   5 +-
 kernel/livepatch/transition.c                      |  34 +-
 lib/Kconfig.debug                                  |  22 +-
 lib/Makefile                                       |   2 +
 lib/livepatch/Makefile                             |  15 +
 lib/livepatch/test_klp_atomic_replace.c            |  57 ++
 lib/livepatch/test_klp_callbacks_busy.c            |  43 ++
 lib/livepatch/test_klp_callbacks_demo.c            | 121 ++++
 lib/livepatch/test_klp_callbacks_demo2.c           |  93 +++
 lib/livepatch/test_klp_callbacks_mod.c             |  24 +
 lib/livepatch/test_klp_livepatch.c                 |  51 ++
 lib/livepatch/test_klp_shadow_vars.c               | 236 ++++++
 samples/livepatch/livepatch-callbacks-demo.c       |  13 +-
 samples/livepatch/livepatch-sample.c               |  13 +-
 samples/livepatch/livepatch-shadow-fix1.c          |  14 +-
 samples/livepatch/livepatch-shadow-fix2.c          |  14 +-
 tools/testing/selftests/Makefile                   |   1 +
 tools/testing/selftests/livepatch/Makefile         |   8 +
 tools/testing/selftests/livepatch/README           |  43 ++
 tools/testing/selftests/livepatch/config           |   1 +
 tools/testing/selftests/livepatch/functions.sh     | 203 ++++++
 .../testing/selftests/livepatch/test-callbacks.sh  | 587 +++++++++++++++
 .../testing/selftests/livepatch/test-livepatch.sh  | 168 +++++
 .../selftests/livepatch/test-shadow-vars.sh        |  60 ++
 32 files changed, 2546 insertions(+), 943 deletions(-)
 create mode 100644 Documentation/livepatch/cumulative-patches.txt
 create mode 100644 lib/livepatch/Makefile
 create mode 100644 lib/livepatch/test_klp_atomic_replace.c
 create mode 100644 lib/livepatch/test_klp_callbacks_busy.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo2.c
 create mode 100644 lib/livepatch/test_klp_callbacks_mod.c
 create mode 100644 lib/livepatch/test_klp_livepatch.c
 create mode 100644 lib/livepatch/test_klp_shadow_vars.c
 create mode 100644 tools/testing/selftests/livepatch/Makefile
 create mode 100644 tools/testing/selftests/livepatch/README
 create mode 100644 tools/testing/selftests/livepatch/config
 create mode 100644 tools/testing/selftests/livepatch/functions.sh
 create mode 100755 tools/testing/selftests/livepatch/test-callbacks.sh
 create mode 100755 tools/testing/selftests/livepatch/test-livepatch.sh
 create mode 100755 tools/testing/selftests/livepatch/test-shadow-vars.sh

-- 
2.13.7



* [PATCH v15 01/11] livepatch: Change unsigned long old_addr -> void *old_func in struct klp_func
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 02/11] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code Petr Mladek
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

The addresses of the to-be-patched function and of the new function are
stored in struct klp_func as:

	void *new_func;
	unsigned long old_addr;

The different naming scheme and type are derived from the way
the addresses are set. @old_addr is assigned at runtime using
kallsyms-based search. @new_func is statically initialized,
for example:

  static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
  };

This patch changes unsigned long old_addr -> void *old_func. It removes
some confusion when these addresses are later used in the code. It is
motivated by a follow-up patch that adds special NOP struct klp_func
entries, where we want to assign func->new_func = func->old_addr, which
after this change becomes func->new_func = func->old_func.
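
To illustrate that motivation (a conceptual sketch only, not necessarily
the final NOP code):

  /* Hypothetical NOP entry: the "new" code is the original function. */
  nop->old_func = func->old_func;	/* both are void * after the rename */
  nop->new_func = nop->old_func;	/* no cast needed any more */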

This patch does not modify the existing behavior.

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Alice Ferrazzi <alice.ferrazzi@gmail.com>
---
 include/linux/livepatch.h     |  4 ++--
 kernel/livepatch/core.c       |  6 +++---
 kernel/livepatch/patch.c      | 18 ++++++++++--------
 kernel/livepatch/patch.h      |  4 ++--
 kernel/livepatch/transition.c |  4 ++--
 5 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index aec44b1d9582..634e13876380 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -40,7 +40,7 @@
  * @new_func:	pointer to the patched function code
  * @old_sympos: a hint indicating which symbol position the old function
  *		can be found (optional)
- * @old_addr:	the address of the function being patched
+ * @old_func:	pointer to the function being patched
  * @kobj:	kobject for sysfs resources
  * @stack_node:	list node for klp_ops func_stack list
  * @old_size:	size of the old function
@@ -77,7 +77,7 @@ struct klp_func {
 	unsigned long old_sympos;
 
 	/* internal */
-	unsigned long old_addr;
+	void *old_func;
 	struct kobject kobj;
 	struct list_head stack_node;
 	unsigned long old_size, new_size;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 5b77a7314e01..cb59c7fb94cb 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -648,7 +648,7 @@ static void klp_free_object_loaded(struct klp_object *obj)
 	obj->mod = NULL;
 
 	klp_for_each_func(obj, func)
-		func->old_addr = 0;
+		func->old_func = NULL;
 }
 
 /*
@@ -721,11 +721,11 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 	klp_for_each_func(obj, func) {
 		ret = klp_find_object_symbol(obj->name, func->old_name,
 					     func->old_sympos,
-					     &func->old_addr);
+					     (unsigned long *)&func->old_func);
 		if (ret)
 			return ret;
 
-		ret = kallsyms_lookup_size_offset(func->old_addr,
+		ret = kallsyms_lookup_size_offset((unsigned long)func->old_func,
 						  &func->old_size, NULL);
 		if (!ret) {
 			pr_err("kallsyms size lookup failed for '%s'\n",
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 7702cb4064fc..825022d70912 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -34,7 +34,7 @@
 
 static LIST_HEAD(klp_ops);
 
-struct klp_ops *klp_find_ops(unsigned long old_addr)
+struct klp_ops *klp_find_ops(void *old_func)
 {
 	struct klp_ops *ops;
 	struct klp_func *func;
@@ -42,7 +42,7 @@ struct klp_ops *klp_find_ops(unsigned long old_addr)
 	list_for_each_entry(ops, &klp_ops, node) {
 		func = list_first_entry(&ops->func_stack, struct klp_func,
 					stack_node);
-		if (func->old_addr == old_addr)
+		if (func->old_func == old_func)
 			return ops;
 	}
 
@@ -142,17 +142,18 @@ static void klp_unpatch_func(struct klp_func *func)
 
 	if (WARN_ON(!func->patched))
 		return;
-	if (WARN_ON(!func->old_addr))
+	if (WARN_ON(!func->old_func))
 		return;
 
-	ops = klp_find_ops(func->old_addr);
+	ops = klp_find_ops(func->old_func);
 	if (WARN_ON(!ops))
 		return;
 
 	if (list_is_singular(&ops->func_stack)) {
 		unsigned long ftrace_loc;
 
-		ftrace_loc = klp_get_ftrace_location(func->old_addr);
+		ftrace_loc =
+			klp_get_ftrace_location((unsigned long)func->old_func);
 		if (WARN_ON(!ftrace_loc))
 			return;
 
@@ -174,17 +175,18 @@ static int klp_patch_func(struct klp_func *func)
 	struct klp_ops *ops;
 	int ret;
 
-	if (WARN_ON(!func->old_addr))
+	if (WARN_ON(!func->old_func))
 		return -EINVAL;
 
 	if (WARN_ON(func->patched))
 		return -EINVAL;
 
-	ops = klp_find_ops(func->old_addr);
+	ops = klp_find_ops(func->old_func);
 	if (!ops) {
 		unsigned long ftrace_loc;
 
-		ftrace_loc = klp_get_ftrace_location(func->old_addr);
+		ftrace_loc =
+			klp_get_ftrace_location((unsigned long)func->old_func);
 		if (!ftrace_loc) {
 			pr_err("failed to find location for function '%s'\n",
 				func->old_name);
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index e72d8250d04b..a9b16e513656 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -10,7 +10,7 @@
  * struct klp_ops - structure for tracking registered ftrace ops structs
  *
  * A single ftrace_ops is shared between all enabled replacement functions
- * (klp_func structs) which have the same old_addr.  This allows the switch
+ * (klp_func structs) which have the same old_func.  This allows the switch
  * between function versions to happen instantaneously by updating the klp_ops
  * struct's func_stack list.  The winner is the klp_func at the top of the
  * func_stack (front of the list).
@@ -25,7 +25,7 @@ struct klp_ops {
 	struct ftrace_ops fops;
 };
 
-struct klp_ops *klp_find_ops(unsigned long old_addr);
+struct klp_ops *klp_find_ops(void *old_func);
 
 int klp_patch_object(struct klp_object *obj);
 void klp_unpatch_object(struct klp_object *obj);
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 304d5eb8a98c..f27a378ad5e1 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -224,11 +224,11 @@ static int klp_check_stack_func(struct klp_func *func,
 			 * Check for the to-be-patched function
 			 * (the previous func).
 			 */
-			ops = klp_find_ops(func->old_addr);
+			ops = klp_find_ops(func->old_func);
 
 			if (list_is_singular(&ops->func_stack)) {
 				/* original function */
-				func_addr = func->old_addr;
+				func_addr = (unsigned long)func->old_func;
 				func_size = func->old_size;
 			} else {
 				/* previously patched function */
-- 
2.13.7



* [PATCH v15 02/11] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 01/11] livepatch: Change unsigned long old_addr -> void *old_func in struct klp_func Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 03/11] livepatch: Consolidate klp_free functions Petr Mladek
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

We are going to simplify the API and code by removing the registration
step. This will require calling the init/free functions from the
enable/disable ones.

This patch just moves the code around to avoid the need for more forward
declarations later.

It does not change the code itself except for adding two forward
declarations.

Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 kernel/livepatch/core.c | 330 ++++++++++++++++++++++++------------------------
 1 file changed, 166 insertions(+), 164 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index cb59c7fb94cb..20589da35194 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -278,170 +278,6 @@ static int klp_write_object_relocations(struct module *pmod,
 	return ret;
 }
 
-static int __klp_disable_patch(struct klp_patch *patch)
-{
-	struct klp_object *obj;
-
-	if (WARN_ON(!patch->enabled))
-		return -EINVAL;
-
-	if (klp_transition_patch)
-		return -EBUSY;
-
-	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches) &&
-	    list_next_entry(patch, list)->enabled)
-		return -EBUSY;
-
-	klp_init_transition(patch, KLP_UNPATCHED);
-
-	klp_for_each_object(patch, obj)
-		if (obj->patched)
-			klp_pre_unpatch_callback(obj);
-
-	/*
-	 * Enforce the order of the func->transition writes in
-	 * klp_init_transition() and the TIF_PATCH_PENDING writes in
-	 * klp_start_transition().  In the rare case where klp_ftrace_handler()
-	 * is called shortly after klp_update_patch_state() switches the task,
-	 * this ensures the handler sees that func->transition is set.
-	 */
-	smp_wmb();
-
-	klp_start_transition();
-	klp_try_complete_transition();
-	patch->enabled = false;
-
-	return 0;
-}
-
-/**
- * klp_disable_patch() - disables a registered patch
- * @patch:	The registered, enabled patch to be disabled
- *
- * Unregisters the patched functions from ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_disable_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (!patch->enabled) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = __klp_disable_patch(patch);
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_disable_patch);
-
-static int __klp_enable_patch(struct klp_patch *patch)
-{
-	struct klp_object *obj;
-	int ret;
-
-	if (klp_transition_patch)
-		return -EBUSY;
-
-	if (WARN_ON(patch->enabled))
-		return -EINVAL;
-
-	/* enforce stacking: only the first disabled patch can be enabled */
-	if (patch->list.prev != &klp_patches &&
-	    !list_prev_entry(patch, list)->enabled)
-		return -EBUSY;
-
-	/*
-	 * A reference is taken on the patch module to prevent it from being
-	 * unloaded.
-	 */
-	if (!try_module_get(patch->mod))
-		return -ENODEV;
-
-	pr_notice("enabling patch '%s'\n", patch->mod->name);
-
-	klp_init_transition(patch, KLP_PATCHED);
-
-	/*
-	 * Enforce the order of the func->transition writes in
-	 * klp_init_transition() and the ops->func_stack writes in
-	 * klp_patch_object(), so that klp_ftrace_handler() will see the
-	 * func->transition updates before the handler is registered and the
-	 * new funcs become visible to the handler.
-	 */
-	smp_wmb();
-
-	klp_for_each_object(patch, obj) {
-		if (!klp_is_object_loaded(obj))
-			continue;
-
-		ret = klp_pre_patch_callback(obj);
-		if (ret) {
-			pr_warn("pre-patch callback failed for object '%s'\n",
-				klp_is_module(obj) ? obj->name : "vmlinux");
-			goto err;
-		}
-
-		ret = klp_patch_object(obj);
-		if (ret) {
-			pr_warn("failed to patch object '%s'\n",
-				klp_is_module(obj) ? obj->name : "vmlinux");
-			goto err;
-		}
-	}
-
-	klp_start_transition();
-	klp_try_complete_transition();
-	patch->enabled = true;
-
-	return 0;
-err:
-	pr_warn("failed to enable patch '%s'\n", patch->mod->name);
-
-	klp_cancel_transition();
-	return ret;
-}
-
-/**
- * klp_enable_patch() - enables a registered patch
- * @patch:	The registered, disabled patch to be enabled
- *
- * Performs the needed symbol lookups and code relocations,
- * then registers the patched functions with ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_enable_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = __klp_enable_patch(patch);
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_enable_patch);
-
 /*
  * Sysfs Interface
  *
@@ -454,6 +290,8 @@ EXPORT_SYMBOL_GPL(klp_enable_patch);
  * /sys/kernel/livepatch/<patch>/<object>
  * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
  */
+static int __klp_disable_patch(struct klp_patch *patch);
+static int __klp_enable_patch(struct klp_patch *patch);
 
 static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 			     const char *buf, size_t count)
@@ -904,6 +742,170 @@ int klp_register_patch(struct klp_patch *patch)
 }
 EXPORT_SYMBOL_GPL(klp_register_patch);
 
+static int __klp_disable_patch(struct klp_patch *patch)
+{
+	struct klp_object *obj;
+
+	if (WARN_ON(!patch->enabled))
+		return -EINVAL;
+
+	if (klp_transition_patch)
+		return -EBUSY;
+
+	/* enforce stacking: only the last enabled patch can be disabled */
+	if (!list_is_last(&patch->list, &klp_patches) &&
+	    list_next_entry(patch, list)->enabled)
+		return -EBUSY;
+
+	klp_init_transition(patch, KLP_UNPATCHED);
+
+	klp_for_each_object(patch, obj)
+		if (obj->patched)
+			klp_pre_unpatch_callback(obj);
+
+	/*
+	 * Enforce the order of the func->transition writes in
+	 * klp_init_transition() and the TIF_PATCH_PENDING writes in
+	 * klp_start_transition().  In the rare case where klp_ftrace_handler()
+	 * is called shortly after klp_update_patch_state() switches the task,
+	 * this ensures the handler sees that func->transition is set.
+	 */
+	smp_wmb();
+
+	klp_start_transition();
+	klp_try_complete_transition();
+	patch->enabled = false;
+
+	return 0;
+}
+
+/**
+ * klp_disable_patch() - disables a registered patch
+ * @patch:	The registered, enabled patch to be disabled
+ *
+ * Unregisters the patched functions from ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_disable_patch(struct klp_patch *patch)
+{
+	int ret;
+
+	mutex_lock(&klp_mutex);
+
+	if (!klp_is_patch_registered(patch)) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	if (!patch->enabled) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	ret = __klp_disable_patch(patch);
+
+err:
+	mutex_unlock(&klp_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(klp_disable_patch);
+
+static int __klp_enable_patch(struct klp_patch *patch)
+{
+	struct klp_object *obj;
+	int ret;
+
+	if (klp_transition_patch)
+		return -EBUSY;
+
+	if (WARN_ON(patch->enabled))
+		return -EINVAL;
+
+	/* enforce stacking: only the first disabled patch can be enabled */
+	if (patch->list.prev != &klp_patches &&
+	    !list_prev_entry(patch, list)->enabled)
+		return -EBUSY;
+
+	/*
+	 * A reference is taken on the patch module to prevent it from being
+	 * unloaded.
+	 */
+	if (!try_module_get(patch->mod))
+		return -ENODEV;
+
+	pr_notice("enabling patch '%s'\n", patch->mod->name);
+
+	klp_init_transition(patch, KLP_PATCHED);
+
+	/*
+	 * Enforce the order of the func->transition writes in
+	 * klp_init_transition() and the ops->func_stack writes in
+	 * klp_patch_object(), so that klp_ftrace_handler() will see the
+	 * func->transition updates before the handler is registered and the
+	 * new funcs become visible to the handler.
+	 */
+	smp_wmb();
+
+	klp_for_each_object(patch, obj) {
+		if (!klp_is_object_loaded(obj))
+			continue;
+
+		ret = klp_pre_patch_callback(obj);
+		if (ret) {
+			pr_warn("pre-patch callback failed for object '%s'\n",
+				klp_is_module(obj) ? obj->name : "vmlinux");
+			goto err;
+		}
+
+		ret = klp_patch_object(obj);
+		if (ret) {
+			pr_warn("failed to patch object '%s'\n",
+				klp_is_module(obj) ? obj->name : "vmlinux");
+			goto err;
+		}
+	}
+
+	klp_start_transition();
+	klp_try_complete_transition();
+	patch->enabled = true;
+
+	return 0;
+err:
+	pr_warn("failed to enable patch '%s'\n", patch->mod->name);
+
+	klp_cancel_transition();
+	return ret;
+}
+
+/**
+ * klp_enable_patch() - enables a registered patch
+ * @patch:	The registered, disabled patch to be enabled
+ *
+ * Performs the needed symbol lookups and code relocations,
+ * then registers the patched functions with ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_enable_patch(struct klp_patch *patch)
+{
+	int ret;
+
+	mutex_lock(&klp_mutex);
+
+	if (!klp_is_patch_registered(patch)) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	ret = __klp_enable_patch(patch);
+
+err:
+	mutex_unlock(&klp_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(klp_enable_patch);
+
 /*
  * Remove parts of patches that touch a given kernel module. The list of
  * patches processed might be limited. When limit is NULL, all patches
-- 
2.13.7



* [PATCH v15 03/11] livepatch: Consolidate klp_free functions
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 01/11] livepatch: Change unsigned long old_addr -> void *old_func in struct klp_func Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 02/11] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-09 14:35   ` Miroslav Benes
  2019-01-09 12:43 ` [PATCH v15 04/11] livepatch: Don't block the removal of patches loaded after a forced transition Petr Mladek
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek, Jessica Yu

The code for freeing livepatch structures is a bit scattered and tricky:

  + direct calls to klp_free_*_limited() and kobject_put() are
    used to release partially initialized objects

  + klp_free_patch() removes the patch from the public list
    and releases all objects except for patch->kobj

  + kobject_put(&patch->kobj) and the related wait_for_completion()
    are called directly outside klp_mutex; this code is duplicated

Now, we are going to remove the registration stage to simplify the API
and the code. This would require handling more situations in
klp_enable_patch() error paths.

More importantly, we are going to add a feature called atomic replace.
It will need to dynamically create func and object structures. We will
want to reuse the existing init() and free() functions. This would
create even more error path scenarios.

This patch implements more straightforward free functions:

  + checks kobj_added flag instead of @limit[*]

  + initializes patch->list early so that the check for empty list
    always works

  + The action(s) that have to be done outside klp_mutex are done
    in a separate klp_free_patch_finish() function. It waits only
    when patch->kobj was really released via the _start() part.

The patch does not change the existing behavior.

[*] We need our own flag to track that the kobject was successfully
    added to the hierarchy.  Note that kobj.state_initialized only
    indicates that the kobject has been initialized, not whether it has
    been added (and needs to be removed on cleanup).
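
A short sketch of the rule described above, mirroring the hunks below:

  klp_for_each_func(obj, func) {
	/* Put only kobjects that were really added to the hierarchy. */
	if (func->kobj_added)
		kobject_put(&func->kobj);
  }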

Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Jason Baron <jbaron@akamai.com>
---
 include/linux/livepatch.h |   6 ++
 kernel/livepatch/core.c   | 137 +++++++++++++++++++++++++++++++---------------
 2 files changed, 98 insertions(+), 45 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 634e13876380..6978785bc059 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -45,6 +45,7 @@
  * @stack_node:	list node for klp_ops func_stack list
  * @old_size:	size of the old function
  * @new_size:	size of the new function
+ * @kobj_added: @kobj has been added and needs freeing
  * @patched:	the func has been added to the klp_ops list
  * @transition:	the func is currently being applied or reverted
  *
@@ -81,6 +82,7 @@ struct klp_func {
 	struct kobject kobj;
 	struct list_head stack_node;
 	unsigned long old_size, new_size;
+	bool kobj_added;
 	bool patched;
 	bool transition;
 };
@@ -117,6 +119,7 @@ struct klp_callbacks {
  * @kobj:	kobject for sysfs resources
  * @mod:	kernel module associated with the patched object
  *		(NULL for vmlinux)
+ * @kobj_added: @kobj has been added and needs freeing
  * @patched:	the object's funcs have been added to the klp_ops list
  */
 struct klp_object {
@@ -128,6 +131,7 @@ struct klp_object {
 	/* internal */
 	struct kobject kobj;
 	struct module *mod;
+	bool kobj_added;
 	bool patched;
 };
 
@@ -137,6 +141,7 @@ struct klp_object {
  * @objs:	object entries for kernel objects to be patched
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
+ * @kobj_added: @kobj has been added and needs freeing
  * @enabled:	the patch is enabled (but operation may be incomplete)
  * @finish:	for waiting till it is safe to remove the patch module
  */
@@ -148,6 +153,7 @@ struct klp_patch {
 	/* internal */
 	struct list_head list;
 	struct kobject kobj;
+	bool kobj_added;
 	bool enabled;
 	struct completion finish;
 };
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 20589da35194..6f0d9095f662 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -465,17 +465,15 @@ static struct kobj_type klp_ktype_func = {
 	.sysfs_ops = &kobj_sysfs_ops,
 };
 
-/*
- * Free all functions' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_funcs_limited(struct klp_object *obj,
-				   struct klp_func *limit)
+static void klp_free_funcs(struct klp_object *obj)
 {
 	struct klp_func *func;
 
-	for (func = obj->funcs; func->old_name && func != limit; func++)
-		kobject_put(&func->kobj);
+	klp_for_each_func(obj, func) {
+		/* Might be called from klp_init_patch() error path. */
+		if (func->kobj_added)
+			kobject_put(&func->kobj);
+	}
 }
 
 /* Clean up when a patched object is unloaded */
@@ -489,30 +487,60 @@ static void klp_free_object_loaded(struct klp_object *obj)
 		func->old_func = NULL;
 }
 
-/*
- * Free all objects' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_objects_limited(struct klp_patch *patch,
-				     struct klp_object *limit)
+static void klp_free_objects(struct klp_patch *patch)
 {
 	struct klp_object *obj;
 
-	for (obj = patch->objs; obj->funcs && obj != limit; obj++) {
-		klp_free_funcs_limited(obj, NULL);
-		kobject_put(&obj->kobj);
+	klp_for_each_object(patch, obj) {
+		klp_free_funcs(obj);
+
+		/* Might be called from klp_init_patch() error path. */
+		if (obj->kobj_added)
+			kobject_put(&obj->kobj);
 	}
 }
 
-static void klp_free_patch(struct klp_patch *patch)
+/*
+ * This function implements the free operations that can be called safely
+ * under klp_mutex.
+ *
+ * The operation must be completed by calling klp_free_patch_finish()
+ * outside klp_mutex.
+ */
+static void klp_free_patch_start(struct klp_patch *patch)
 {
-	klp_free_objects_limited(patch, NULL);
 	if (!list_empty(&patch->list))
 		list_del(&patch->list);
+
+	klp_free_objects(patch);
+}
+
+/*
+ * This function implements the free part that must be called outside
+ * klp_mutex.
+ *
+ * It must be called after klp_free_patch_start(). And it has to be
+ * the last function accessing the livepatch structures when the patch
+ * gets disabled.
+ */
+static void klp_free_patch_finish(struct klp_patch *patch)
+{
+	/*
+	 * Avoid deadlock with enabled_store() sysfs callback by
+	 * calling this outside klp_mutex. It is safe because
+	 * this is called when the patch gets disabled and it
+	 * cannot get enabled again.
+	 */
+	if (patch->kobj_added) {
+		kobject_put(&patch->kobj);
+		wait_for_completion(&patch->finish);
+	}
 }
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 {
+	int ret;
+
 	if (!func->old_name || !func->new_func)
 		return -EINVAL;
 
@@ -528,9 +556,13 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 	 * object. If the user selects 0 for old_sympos, then 1 will be used
 	 * since a unique symbol will be the first occurrence.
 	 */
-	return kobject_init_and_add(&func->kobj, &klp_ktype_func,
-				    &obj->kobj, "%s,%lu", func->old_name,
-				    func->old_sympos ? func->old_sympos : 1);
+	ret = kobject_init_and_add(&func->kobj, &klp_ktype_func,
+				   &obj->kobj, "%s,%lu", func->old_name,
+				   func->old_sympos ? func->old_sympos : 1);
+	if (!ret)
+		func->kobj_added = true;
+
+	return ret;
 }
 
 /* Arches may override this to finish any remaining arch-specific tasks */
@@ -589,9 +621,6 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
 	int ret;
 	const char *name;
 
-	if (!obj->funcs)
-		return -EINVAL;
-
 	if (klp_is_module(obj) && strlen(obj->name) >= MODULE_NAME_LEN)
 		return -EINVAL;
 
@@ -605,46 +634,66 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
 				   &patch->kobj, "%s", name);
 	if (ret)
 		return ret;
+	obj->kobj_added = true;
 
 	klp_for_each_func(obj, func) {
 		ret = klp_init_func(obj, func);
 		if (ret)
-			goto free;
+			return ret;
 	}
 
-	if (klp_is_object_loaded(obj)) {
+	if (klp_is_object_loaded(obj))
 		ret = klp_init_object_loaded(patch, obj);
-		if (ret)
-			goto free;
-	}
-
-	return 0;
 
-free:
-	klp_free_funcs_limited(obj, func);
-	kobject_put(&obj->kobj);
 	return ret;
 }
 
-static int klp_init_patch(struct klp_patch *patch)
+static int klp_init_patch_early(struct klp_patch *patch)
 {
 	struct klp_object *obj;
-	int ret;
+	struct klp_func *func;
 
 	if (!patch->objs)
 		return -EINVAL;
 
-	mutex_lock(&klp_mutex);
-
+	INIT_LIST_HEAD(&patch->list);
+	patch->kobj_added = false;
 	patch->enabled = false;
 	init_completion(&patch->finish);
 
+	klp_for_each_object(patch, obj) {
+		if (!obj->funcs)
+			return -EINVAL;
+
+		obj->kobj_added = false;
+
+		klp_for_each_func(obj, func)
+			func->kobj_added = false;
+	}
+
+	return 0;
+}
+
+static int klp_init_patch(struct klp_patch *patch)
+{
+	struct klp_object *obj;
+	int ret;
+
+	mutex_lock(&klp_mutex);
+
+	ret = klp_init_patch_early(patch);
+	if (ret) {
+		mutex_unlock(&klp_mutex);
+		return ret;
+	}
+
 	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
 				   klp_root_kobj, "%s", patch->mod->name);
 	if (ret) {
 		mutex_unlock(&klp_mutex);
 		return ret;
 	}
+	patch->kobj_added = true;
 
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
@@ -659,12 +708,11 @@ static int klp_init_patch(struct klp_patch *patch)
 	return 0;
 
 free:
-	klp_free_objects_limited(patch, obj);
+	klp_free_patch_start(patch);
 
 	mutex_unlock(&klp_mutex);
 
-	kobject_put(&patch->kobj);
-	wait_for_completion(&patch->finish);
+	klp_free_patch_finish(patch);
 
 	return ret;
 }
@@ -693,12 +741,11 @@ int klp_unregister_patch(struct klp_patch *patch)
 		goto err;
 	}
 
-	klp_free_patch(patch);
+	klp_free_patch_start(patch);
 
 	mutex_unlock(&klp_mutex);
 
-	kobject_put(&patch->kobj);
-	wait_for_completion(&patch->finish);
+	klp_free_patch_finish(patch);
 
 	return 0;
 err:
-- 
2.13.7



* [PATCH v15 04/11] livepatch: Don't block the removal of patches loaded after a forced transition
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (2 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 03/11] livepatch: Consolidate klp_free functions Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-09 14:50   ` Miroslav Benes
  2019-01-09 12:43 ` [PATCH v15 05/11] livepatch: Simplify API by removing registration step Petr Mladek
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

module_put() is currently never called in klp_complete_transition() when
klp_forced is set. As a result, we might keep the reference count even when
klp_enable_patch() fails and klp_cancel_transition() is called.

This might give the impression that a module might get blocked in some
strange init state. Fortunately, it is not the case. The reference count
is ignored when mod->init fails and erroneous modules are always removed.

Anyway, this might be confusing. Instead, this patch moves
the global klp_forced flag into struct klp_patch. As a result,
we block only modules that might still be in use after a forced
transition. Newly loaded livepatches might be eventually completely
removed later.

It is not a big deal. But the code is at least consistent with
reality.

Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 include/linux/livepatch.h     |  2 ++
 kernel/livepatch/core.c       |  4 +++-
 kernel/livepatch/core.h       |  1 +
 kernel/livepatch/transition.c | 10 +++++-----
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 6978785bc059..6a9165d9b090 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -143,6 +143,7 @@ struct klp_object {
  * @kobj:	kobject for sysfs resources
  * @kobj_added: @kobj has been added and needs freeing
  * @enabled:	the patch is enabled (but operation may be incomplete)
+ * @forced:	was involved in a forced transition
  * @finish:	for waiting till it is safe to remove the patch module
  */
 struct klp_patch {
@@ -155,6 +156,7 @@ struct klp_patch {
 	struct kobject kobj;
 	bool kobj_added;
 	bool enabled;
+	bool forced;
 	struct completion finish;
 };
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 6f0d9095f662..e77c5017ae0c 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -45,7 +45,8 @@
  */
 DEFINE_MUTEX(klp_mutex);
 
-static LIST_HEAD(klp_patches);
+/* Registered patches */
+LIST_HEAD(klp_patches);
 
 static struct kobject *klp_root_kobj;
 
@@ -659,6 +660,7 @@ static int klp_init_patch_early(struct klp_patch *patch)
 	INIT_LIST_HEAD(&patch->list);
 	patch->kobj_added = false;
 	patch->enabled = false;
+	patch->forced = false;
 	init_completion(&patch->finish);
 
 	klp_for_each_object(patch, obj) {
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 48a83d4364cf..d0cb5390e247 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -5,6 +5,7 @@
 #include <linux/livepatch.h>
 
 extern struct mutex klp_mutex;
+extern struct list_head klp_patches;
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index f27a378ad5e1..a4c921364003 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -33,8 +33,6 @@ struct klp_patch *klp_transition_patch;
 
 static int klp_target_state = KLP_UNDEFINED;
 
-static bool klp_forced = false;
-
 /*
  * This work can be performed periodically to finish patching or unpatching any
  * "straggler" tasks which failed to transition in the first attempt.
@@ -137,10 +135,10 @@ static void klp_complete_transition(void)
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
 	/*
-	 * klp_forced set implies unbounded increase of module's ref count if
+	 * patch->forced set implies unbounded increase of module's ref count if
 	 * the module is disabled/enabled in a loop.
 	 */
-	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
+	if (!klp_transition_patch->forced && klp_target_state == KLP_UNPATCHED)
 		module_put(klp_transition_patch->mod);
 
 	klp_target_state = KLP_UNDEFINED;
@@ -620,6 +618,7 @@ void klp_send_signals(void)
  */
 void klp_force_transition(void)
 {
+	struct klp_patch *patch;
 	struct task_struct *g, *task;
 	unsigned int cpu;
 
@@ -633,5 +632,6 @@ void klp_force_transition(void)
 	for_each_possible_cpu(cpu)
 		klp_update_patch_state(idle_task(cpu));
 
-	klp_forced = true;
+	list_for_each_entry(patch, &klp_patches, list)
+		patch->forced = true;
 }
-- 
2.13.7



* [PATCH v15 05/11] livepatch: Simplify API by removing registration step
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (3 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 04/11] livepatch: Don't block the removal of patches loaded after a forced transition Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-10 12:32   ` Miroslav Benes
  2019-01-09 12:43 ` [PATCH v15 06/11] livepatch: Use lists to manage patches, objects and functions Petr Mladek
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

The possibility to re-enable a registered patch was useful for immediate
patches where the livepatch module had to stay around until the system
rebooted. The improved consistency model allows achieving the same result
by unloading and loading the livepatch module again.

Also, we are going to add a feature called atomic replace. It will allow
creating a patch that replaces all already registered patches. The aim is
to handle dependent patches more securely. It will obsolete the stack of
patches that helped to handle the dependencies so far. It might then be
unclear when re-enabling a cumulative patch is safe.

It would be complicated to support all these modes. Instead, we can
actually make the API and code easier to understand.

Therefore, remove the two-step public API. All the checks and init calls
are moved from klp_register_patch() to klp_enable_patch(). Also the patch
is automatically freed, including the sysfs interface, when the transition
to the disabled state is completed.

As a result, there is never a disabled patch on the top of the stack.
Therefore we do not need to check the stack in __klp_enable_patch(),
and we can simplify the check in __klp_disable_patch().

Also the API and logic are much easier. It is enough to call
klp_enable_patch() from the module_init() callback. The patch can be
disabled by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then
the module can be removed once the transition finishes and the sysfs
interface is freed.
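
For illustration, a minimal livepatch module then boils down to something
like this (a sketch modeled on samples/livepatch/livepatch-sample.c; the
patched symbol and the replacement function are just examples):

  #include <linux/module.h>
  #include <linux/kernel.h>
  #include <linux/livepatch.h>
  #include <linux/seq_file.h>

  static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
  {
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
  }

  static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
  };

  static struct klp_object objs[] = {
	{
		/* name being NULL means vmlinux */
		.funcs = funcs,
	}, { }
  };

  static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
  };

  static int livepatch_init(void)
  {
	/* No registration step any more; enable directly from init. */
	return klp_enable_patch(&patch);
  }

  static void livepatch_exit(void)
  {
	/* Nothing to do; the patch must have been disabled via sysfs. */
  }

  module_init(livepatch_init);
  module_exit(livepatch_exit);
  MODULE_LICENSE("GPL");
  MODULE_INFO(livepatch, "Y");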

The only problem is how to free the structures and kobjects safely.
The operation is triggered from the sysfs interface. We cannot put
the related kobject from there because it would cause a lock inversion
between klp_mutex and the kernfs locks, see the kn->count lockdep map.

Therefore, offload the free task to a workqueue. It is perfectly fine:

  + The patch can no longer be used in the livepatch operations.

  + The module cannot be removed until the free operation finishes
    and module_put() is called.

  + The operation is asynchronous already when the first
    klp_try_complete_transition() fails and another call
    is queued with a delay.
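
A conceptual sketch of this offload (the exact call site is defined by
the patch below; only the pattern is shown here):

  /* Set up once, early during patch initialization: */
  INIT_WORK(&patch->free_work, klp_free_patch_work_fn);

  /* Later, instead of freeing in sysfs context: */
  schedule_work(&patch->free_work);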

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 Documentation/livepatch/livepatch.txt        | 135 +++++--------
 include/linux/livepatch.h                    |   7 +-
 kernel/livepatch/core.c                      | 280 +++++++++------------------
 kernel/livepatch/core.h                      |   2 +
 kernel/livepatch/transition.c                |  19 +-
 samples/livepatch/livepatch-callbacks-demo.c |  13 +-
 samples/livepatch/livepatch-sample.c         |  13 +-
 samples/livepatch/livepatch-shadow-fix1.c    |  14 +-
 samples/livepatch/livepatch-shadow-fix2.c    |  14 +-
 9 files changed, 168 insertions(+), 329 deletions(-)

diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 2d7ed09dbd59..8f56490a4bb6 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -12,12 +12,11 @@ Table of Contents:
 4. Livepatch module
    4.1. New functions
    4.2. Metadata
-   4.3. Livepatch module handling
 5. Livepatch life-cycle
-   5.1. Registration
+   5.1. Loading
    5.2. Enabling
    5.3. Disabling
-   5.4. Unregistration
+   5.4. Removing
 6. Sysfs
 7. Limitations
 
@@ -298,117 +297,89 @@ into three levels:
     see the "Consistency model" section.
 
 
-4.3. Livepatch module handling
-------------------------------
-
-The usual behavior is that the new functions will get used when
-the livepatch module is loaded. For this, the module init() function
-has to register the patch (struct klp_patch) and enable it. See the
-section "Livepatch life-cycle" below for more details about these
-two operations.
-
-Module removal is only safe when there are no users of the underlying
-functions. This is the reason why the force feature permanently disables
-the removal. The forced tasks entered the functions but we cannot say
-that they returned back.  Therefore it cannot be decided when the
-livepatch module can be safely removed. When the system is successfully
-transitioned to a new patch state (patched/unpatched) without being
-forced it is guaranteed that no task sleeps or runs in the old code.
-
-
 5. Livepatch life-cycle
 =======================
 
-Livepatching defines four basic operations that define the life cycle of each
-live patch: registration, enabling, disabling and unregistration.  There are
-several reasons why it is done this way.
-
-First, the patch is applied only when all patched symbols for already
-loaded objects are found. The error handling is much easier if this
-check is done before particular functions get redirected.
+Livepatching can be described by four basic operations:
+loading, enabling, disabling, removing.
 
-Second, it might take some time until the entire system is migrated with
-the hybrid consistency model being used. The patch revert might block
-the livepatch module removal for too long. Therefore it is useful to
-revert the patch using a separate operation that might be called
-explicitly. But it does not make sense to remove all information until
-the livepatch module is really removed.
 
+5.1. Loading
+------------
 
-5.1. Registration
------------------
+The only reasonable way is to enable the patch when the livepatch kernel
+module is being loaded. For this, klp_enable_patch() has to be called
+in the module_init() callback. There are two main reasons:
 
-Each patch first has to be registered using klp_register_patch(). This makes
-the patch known to the livepatch framework. Also it does some preliminary
-computing and checks.
+First, only the module has an easy access to the related struct klp_patch.
 
-In particular, the patch is added into the list of known patches. The
-addresses of the patched functions are found according to their names.
-The special relocations, mentioned in the section "New functions", are
-applied. The relevant entries are created under
-/sys/kernel/livepatch/<name>. The patch is rejected when any operation
-fails.
+Second, the error code might be used to refuse loading the module when
+the patch cannot get enabled.
 
 
 5.2. Enabling
 -------------
 
-Registered patches might be enabled either by calling klp_enable_patch() or
-by writing '1' to /sys/kernel/livepatch/<name>/enabled. The system will
-start using the new implementation of the patched functions at this stage.
+The livepatch gets enabled by calling klp_enable_patch() from
+the module_init() callback. The system will start using the new
+implementation of the patched functions at this stage.
 
-When a patch is enabled, livepatch enters into a transition state where
-tasks are converging to the patched state.  This is indicated by a value
-of '1' in /sys/kernel/livepatch/<name>/transition.  Once all tasks have
-been patched, the 'transition' value changes to '0'.  For more
-information about this process, see the "Consistency model" section.
+First, the addresses of the patched functions are found according to their
+names. The special relocations, mentioned in the section "New functions",
+are applied. The relevant entries are created under
+/sys/kernel/livepatch/<name>. The patch is rejected when any above
+operation fails.
 
-If an original function is patched for the first time, a function
-specific struct klp_ops is created and an universal ftrace handler is
-registered.
+Second, livepatch enters into a transition state where tasks are converging
+to the patched state. If an original function is patched for the first
+time, a function specific struct klp_ops is created and an universal
+ftrace handler is registered[*]. This stage is indicated by a value of '1'
+in /sys/kernel/livepatch/<name>/transition. For more information about
+this process, see the "Consistency model" section.
 
-Functions might be patched multiple times. The ftrace handler is registered
-only once for the given function. Further patches just add an entry to the
-list (see field `func_stack`) of the struct klp_ops. The last added
-entry is chosen by the ftrace handler and becomes the active function
-replacement.
+Finally, once all tasks have been patched, the 'transition' value changes
+to '0'.
 
-Note that the patches might be enabled in a different order than they were
-registered.
+[*] Note that functions might be patched multiple times. The ftrace handler
+    is registered only once for a given function. Further patches just add
+    an entry to the list (see field `func_stack`) of the struct klp_ops.
+    The right implementation is selected by the ftrace handler, see
+    the "Consistency model" section.
 
 
 5.3. Disabling
 --------------
 
-Enabled patches might get disabled either by calling klp_disable_patch() or
-by writing '0' to /sys/kernel/livepatch/<name>/enabled. At this stage
-either the code from the previously enabled patch or even the original
-code gets used.
+Enabled patches might get disabled by writing '0' to
+/sys/kernel/livepatch/<name>/enabled.
 
-When a patch is disabled, livepatch enters into a transition state where
-tasks are converging to the unpatched state.  This is indicated by a
-value of '1' in /sys/kernel/livepatch/<name>/transition.  Once all tasks
-have been unpatched, the 'transition' value changes to '0'.  For more
-information about this process, see the "Consistency model" section.
+First, livepatch enters into a transition state where tasks are converging
+to the unpatched state. The system starts using either the code from
+the previously enabled patch or even the original one. This stage is
+indicated by a value of '1' in /sys/kernel/livepatch/<name>/transition.
+For more information about this process, see the "Consistency model"
+section.
 
-Here all the functions (struct klp_func) associated with the to-be-disabled
+Second, once all tasks have been unpatched, the 'transition' value changes
+to '0'. All the functions (struct klp_func) associated with the to-be-disabled
 patch are removed from the corresponding struct klp_ops. The ftrace handler
 is unregistered and the struct klp_ops is freed when the func_stack list
 becomes empty.
 
-Patches must be disabled in exactly the reverse order in which they were
-enabled. It makes the problem and the implementation much easier.
+Third, the sysfs interface is destroyed.
 
+Note that patches must be disabled in exactly the reverse order in which
+they were enabled. It makes the problem and the implementation much easier.
 
-5.4. Unregistration
--------------------
 
-Disabled patches might be unregistered by calling klp_unregister_patch().
-This can be done only when the patch is disabled and the code is no longer
-used. It must be called before the livepatch module gets unloaded.
+5.4. Removing
+-------------
 
-At this stage, all the relevant sys-fs entries are removed and the patch
-is removed from the list of known patches.
+Module removal is only safe when there are no users of functions provided
+by the module. This is the reason why the force feature permanently
+disables the removal. Only when the system is successfully transitioned
+to a new patch state (patched/unpatched) without being forced it is
+guaranteed that no task sleeps or runs in the old code.
 
 
 6. Sysfs
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 6a9165d9b090..8f9c19c69744 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -139,11 +139,12 @@ struct klp_object {
  * struct klp_patch - patch structure for live patching
  * @mod:	reference to the live patch module
  * @objs:	object entries for kernel objects to be patched
- * @list:	list node for global list of registered patches
+ * @list:	list node for global list of actively used patches
  * @kobj:	kobject for sysfs resources
  * @kobj_added: @kobj has been added and needs freeing
  * @enabled:	the patch is enabled (but operation may be incomplete)
  * @forced:	was involved in a forced transition
+ * @free_work:	patch cleanup from workqueue-context
  * @finish:	for waiting till it is safe to remove the patch module
  */
 struct klp_patch {
@@ -157,6 +158,7 @@ struct klp_patch {
 	bool kobj_added;
 	bool enabled;
 	bool forced;
+	struct work_struct free_work;
 	struct completion finish;
 };
 
@@ -168,10 +170,7 @@ struct klp_patch {
 	     func->old_name || func->new_func || func->old_sympos; \
 	     func++)
 
-int klp_register_patch(struct klp_patch *);
-int klp_unregister_patch(struct klp_patch *);
 int klp_enable_patch(struct klp_patch *);
-int klp_disable_patch(struct klp_patch *);
 
 void arch_klp_init_object_loaded(struct klp_patch *patch,
 				 struct klp_object *obj);
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index e77c5017ae0c..bd41b03a72d5 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -45,7 +45,11 @@
  */
 DEFINE_MUTEX(klp_mutex);
 
-/* Registered patches */
+/*
+ * Actively used patches: enabled or in transition. Note that replaced
+ * or disabled patches are not listed even though the related kernel
+ * module can still be loaded.
+ */
 LIST_HEAD(klp_patches);
 
 static struct kobject *klp_root_kobj;
@@ -83,17 +87,6 @@ static void klp_find_object_module(struct klp_object *obj)
 	mutex_unlock(&module_mutex);
 }
 
-static bool klp_is_patch_registered(struct klp_patch *patch)
-{
-	struct klp_patch *mypatch;
-
-	list_for_each_entry(mypatch, &klp_patches, list)
-		if (mypatch == patch)
-			return true;
-
-	return false;
-}
-
 static bool klp_initialized(void)
 {
 	return !!klp_root_kobj;
@@ -292,7 +285,6 @@ static int klp_write_object_relocations(struct module *pmod,
  * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
  */
 static int __klp_disable_patch(struct klp_patch *patch);
-static int __klp_enable_patch(struct klp_patch *patch);
 
 static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 			     const char *buf, size_t count)
@@ -309,40 +301,32 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
-		/*
-		 * Module with the patch could either disappear meanwhile or is
-		 * not properly initialized yet.
-		 */
-		ret = -EINVAL;
-		goto err;
-	}
-
 	if (patch->enabled == enabled) {
 		/* already in requested state */
 		ret = -EINVAL;
-		goto err;
+		goto out;
 	}
 
-	if (patch == klp_transition_patch) {
+	/*
+	 * Allow a pending transition to be reversed in both directions.
+	 * It might be necessary to complete the transition without
+	 * forcing it, which could break the system integrity.
+	 *
+	 * Do not allow a disabled patch to be re-enabled.
+	 */
+	if (patch == klp_transition_patch)
 		klp_reverse_transition();
-	} else if (enabled) {
-		ret = __klp_enable_patch(patch);
-		if (ret)
-			goto err;
-	} else {
+	else if (!enabled)
 		ret = __klp_disable_patch(patch);
-		if (ret)
-			goto err;
-	}
+	else
+		ret = -EINVAL;
 
+out:
 	mutex_unlock(&klp_mutex);
 
+	if (ret)
+		return ret;
 	return count;
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
 }
 
 static ssize_t enabled_show(struct kobject *kobj,
@@ -508,7 +492,7 @@ static void klp_free_objects(struct klp_patch *patch)
  * The operation must be completed by calling klp_free_patch_finish()
  * outside klp_mutex.
  */
-static void klp_free_patch_start(struct klp_patch *patch)
+void klp_free_patch_start(struct klp_patch *patch)
 {
 	if (!list_empty(&patch->list))
 		list_del(&patch->list);
@@ -536,6 +520,23 @@ static void klp_free_patch_finish(struct klp_patch *patch)
 		kobject_put(&patch->kobj);
 		wait_for_completion(&patch->finish);
 	}
+
+	/* Put the module after the last access to struct klp_patch. */
+	if (!patch->forced)
+		module_put(patch->mod);
+}
+
+/*
+ * The livepatch might be freed from the sysfs interface created by the patch.
+ * This work allows waiting, in a separate context, until the interface
+ * is destroyed.
+ */
+static void klp_free_patch_work_fn(struct work_struct *work)
+{
+	struct klp_patch *patch =
+		container_of(work, struct klp_patch, free_work);
+
+	klp_free_patch_finish(patch);
 }
 
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
@@ -661,6 +662,7 @@ static int klp_init_patch_early(struct klp_patch *patch)
 	patch->kobj_added = false;
 	patch->enabled = false;
 	patch->forced = false;
+	INIT_WORK(&patch->free_work, klp_free_patch_work_fn);
 	init_completion(&patch->finish);
 
 	klp_for_each_object(patch, obj) {
@@ -673,6 +675,9 @@ static int klp_init_patch_early(struct klp_patch *patch)
 			func->kobj_added = false;
 	}
 
+	if (!try_module_get(patch->mod))
+		return -ENODEV;
+
 	return 0;
 }
 
@@ -681,115 +686,22 @@ static int klp_init_patch(struct klp_patch *patch)
 	struct klp_object *obj;
 	int ret;
 
-	mutex_lock(&klp_mutex);
-
-	ret = klp_init_patch_early(patch);
-	if (ret) {
-		mutex_unlock(&klp_mutex);
-		return ret;
-	}
-
 	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
 				   klp_root_kobj, "%s", patch->mod->name);
-	if (ret) {
-		mutex_unlock(&klp_mutex);
+	if (ret)
 		return ret;
-	}
 	patch->kobj_added = true;
 
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
-			goto free;
+			return ret;
 	}
 
 	list_add_tail(&patch->list, &klp_patches);
 
-	mutex_unlock(&klp_mutex);
-
-	return 0;
-
-free:
-	klp_free_patch_start(patch);
-
-	mutex_unlock(&klp_mutex);
-
-	klp_free_patch_finish(patch);
-
-	return ret;
-}
-
-/**
- * klp_unregister_patch() - unregisters a patch
- * @patch:	Disabled patch to be unregistered
- *
- * Frees the data structures and removes the sysfs interface.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_unregister_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (patch->enabled) {
-		ret = -EBUSY;
-		goto err;
-	}
-
-	klp_free_patch_start(patch);
-
-	mutex_unlock(&klp_mutex);
-
-	klp_free_patch_finish(patch);
-
 	return 0;
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_unregister_patch);
-
-/**
- * klp_register_patch() - registers a patch
- * @patch:	Patch to be registered
- *
- * Initializes the data structure associated with the patch and
- * creates the sysfs interface.
- *
- * There is no need to take the reference on the patch module here. It is done
- * later when the patch is enabled.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_register_patch(struct klp_patch *patch)
-{
-	if (!patch || !patch->mod)
-		return -EINVAL;
-
-	if (!is_livepatch_module(patch->mod)) {
-		pr_err("module %s is not marked as a livepatch module\n",
-		       patch->mod->name);
-		return -EINVAL;
-	}
-
-	if (!klp_initialized())
-		return -ENODEV;
-
-	if (!klp_have_reliable_stack()) {
-		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
-		return -ENOSYS;
-	}
-
-	return klp_init_patch(patch);
 }
-EXPORT_SYMBOL_GPL(klp_register_patch);
 
 static int __klp_disable_patch(struct klp_patch *patch)
 {
@@ -802,8 +714,7 @@ static int __klp_disable_patch(struct klp_patch *patch)
 		return -EBUSY;
 
 	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches) &&
-	    list_next_entry(patch, list)->enabled)
+	if (!list_is_last(&patch->list, &klp_patches))
 		return -EBUSY;
 
 	klp_init_transition(patch, KLP_UNPATCHED);
@@ -822,44 +733,12 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	smp_wmb();
 
 	klp_start_transition();
-	klp_try_complete_transition();
 	patch->enabled = false;
+	klp_try_complete_transition();
 
 	return 0;
 }
 
-/**
- * klp_disable_patch() - disables a registered patch
- * @patch:	The registered, enabled patch to be disabled
- *
- * Unregisters the patched functions from ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_disable_patch(struct klp_patch *patch)
-{
-	int ret;
-
-	mutex_lock(&klp_mutex);
-
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (!patch->enabled) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = __klp_disable_patch(patch);
-
-err:
-	mutex_unlock(&klp_mutex);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(klp_disable_patch);
-
 static int __klp_enable_patch(struct klp_patch *patch)
 {
 	struct klp_object *obj;
@@ -871,17 +750,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	if (WARN_ON(patch->enabled))
 		return -EINVAL;
 
-	/* enforce stacking: only the first disabled patch can be enabled */
-	if (patch->list.prev != &klp_patches &&
-	    !list_prev_entry(patch, list)->enabled)
-		return -EBUSY;
-
-	/*
-	 * A reference is taken on the patch module to prevent it from being
-	 * unloaded.
-	 */
-	if (!try_module_get(patch->mod))
-		return -ENODEV;
+	if (!patch->kobj_added)
+		return -EINVAL;
 
 	pr_notice("enabling patch '%s'\n", patch->mod->name);
 
@@ -916,8 +786,8 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	}
 
 	klp_start_transition();
-	klp_try_complete_transition();
 	patch->enabled = true;
+	klp_try_complete_transition();
 
 	return 0;
 err:
@@ -928,11 +798,15 @@ static int __klp_enable_patch(struct klp_patch *patch)
 }
 
 /**
- * klp_enable_patch() - enables a registered patch
- * @patch:	The registered, disabled patch to be enabled
+ * klp_enable_patch() - enable the livepatch
+ * @patch:	patch to be enabled
  *
- * Performs the needed symbol lookups and code relocations,
- * then registers the patched functions with ftrace.
+ * Initializes the data structure associated with the patch, creates the sysfs
+ * interface, performs the needed symbol lookups and code relocations,
+ * registers the patched functions with ftrace.
+ *
+ * This function is supposed to be called from the livepatch module_init()
+ * callback.
  *
  * Return: 0 on success, otherwise error
  */
@@ -940,17 +814,51 @@ int klp_enable_patch(struct klp_patch *patch)
 {
 	int ret;
 
+	if (!patch || !patch->mod)
+		return -EINVAL;
+
+	if (!is_livepatch_module(patch->mod)) {
+		pr_err("module %s is not marked as a livepatch module\n",
+		       patch->mod->name);
+		return -EINVAL;
+	}
+
+	if (!klp_initialized())
+		return -ENODEV;
+
+	if (!klp_have_reliable_stack()) {
+		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
+		return -ENOSYS;
+	}
+
+
 	mutex_lock(&klp_mutex);
 
-	if (!klp_is_patch_registered(patch)) {
-		ret = -EINVAL;
-		goto err;
+	ret = klp_init_patch_early(patch);
+	if (ret) {
+		mutex_unlock(&klp_mutex);
+		return ret;
 	}
 
+	ret = klp_init_patch(patch);
+	if (ret)
+		goto err;
+
 	ret = __klp_enable_patch(patch);
+	if (ret)
+		goto err;
+
+	mutex_unlock(&klp_mutex);
+
+	return 0;
 
 err:
+	klp_free_patch_start(patch);
+
 	mutex_unlock(&klp_mutex);
+
+	klp_free_patch_finish(patch);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(klp_enable_patch);
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index d0cb5390e247..d4eefc520c08 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -7,6 +7,8 @@
 extern struct mutex klp_mutex;
 extern struct list_head klp_patches;
 
+void klp_free_patch_start(struct klp_patch *patch);
+
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
 	return !obj->name || obj->mod;
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index a4c921364003..c9917a24b3a4 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -134,13 +134,6 @@ static void klp_complete_transition(void)
 	pr_notice("'%s': %s complete\n", klp_transition_patch->mod->name,
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	/*
-	 * patch->forced set implies unbounded increase of module's ref count if
-	 * the module is disabled/enabled in a loop.
-	 */
-	if (!klp_transition_patch->forced && klp_target_state == KLP_UNPATCHED)
-		module_put(klp_transition_patch->mod);
-
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
 }
@@ -357,6 +350,7 @@ void klp_try_complete_transition(void)
 {
 	unsigned int cpu;
 	struct task_struct *g, *task;
+	struct klp_patch *patch;
 	bool complete = true;
 
 	WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
@@ -405,7 +399,18 @@ void klp_try_complete_transition(void)
 	}
 
 	/* we're done, now cleanup the data structures */
+	patch = klp_transition_patch;
 	klp_complete_transition();
+
+	/*
+	 * It would make more sense to free the patch in
+	 * klp_complete_transition(), but it is also called
+	 * from klp_cancel_transition().
+	 */
+	if (!patch->enabled) {
+		klp_free_patch_start(patch);
+		schedule_work(&patch->free_work);
+	}
 }
 
 /*
diff --git a/samples/livepatch/livepatch-callbacks-demo.c b/samples/livepatch/livepatch-callbacks-demo.c
index 72f9e6d1387b..62d97953ad02 100644
--- a/samples/livepatch/livepatch-callbacks-demo.c
+++ b/samples/livepatch/livepatch-callbacks-demo.c
@@ -195,22 +195,11 @@ static struct klp_patch patch = {
 
 static int livepatch_callbacks_demo_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_callbacks_demo_exit(void)
 {
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_callbacks_demo_init);
diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
index 2d554dd930e2..01c9cf003ca2 100644
--- a/samples/livepatch/livepatch-sample.c
+++ b/samples/livepatch/livepatch-sample.c
@@ -69,22 +69,11 @@ static struct klp_patch patch = {
 
 static int livepatch_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_exit(void)
 {
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_init);
diff --git a/samples/livepatch/livepatch-shadow-fix1.c b/samples/livepatch/livepatch-shadow-fix1.c
index 49b13553eaae..6b505b48a81f 100644
--- a/samples/livepatch/livepatch-shadow-fix1.c
+++ b/samples/livepatch/livepatch-shadow-fix1.c
@@ -152,25 +152,13 @@ static struct klp_patch patch = {
 
 static int livepatch_shadow_fix1_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_shadow_fix1_exit(void)
 {
 	/* Cleanup any existing SV_LEAK shadow variables */
 	klp_shadow_free_all(SV_LEAK, livepatch_fix1_dummy_leak_dtor);
-
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_shadow_fix1_init);
diff --git a/samples/livepatch/livepatch-shadow-fix2.c b/samples/livepatch/livepatch-shadow-fix2.c
index b34c7bf83356..52de947b5526 100644
--- a/samples/livepatch/livepatch-shadow-fix2.c
+++ b/samples/livepatch/livepatch-shadow-fix2.c
@@ -129,25 +129,13 @@ static struct klp_patch patch = {
 
 static int livepatch_shadow_fix2_init(void)
 {
-	int ret;
-
-	ret = klp_register_patch(&patch);
-	if (ret)
-		return ret;
-	ret = klp_enable_patch(&patch);
-	if (ret) {
-		WARN_ON(klp_unregister_patch(&patch));
-		return ret;
-	}
-	return 0;
+	return klp_enable_patch(&patch);
 }
 
 static void livepatch_shadow_fix2_exit(void)
 {
 	/* Cleanup any existing SV_COUNTER shadow variables */
 	klp_shadow_free_all(SV_COUNTER, NULL);
-
-	WARN_ON(klp_unregister_patch(&patch));
 }
 
 module_init(livepatch_shadow_fix2_init);
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v15 06/11] livepatch: Use lists to manage patches, objects and functions
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (4 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 05/11] livepatch: Simplify API by removing registration step Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 07/11] livepatch: Add atomic replace Petr Mladek
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

From: Jason Baron <jbaron@akamai.com>

Currently klp_patch contains a pointer to a statically allocated array of
struct klp_object, and struct klp_object contains a pointer to a statically
allocated array of klp_func. In order to allow for the dynamic allocation
of objects and functions, link klp_patch, klp_object, and klp_func together
via linked lists. This allows us to more easily allocate new objects and
functions, while having the iterator be a simple linked list walk.

The static structures are added to the lists early. This allows adding
the dynamically allocated objects before the klp_init_object() and
klp_init_func() calls. It therefore reduces further changes to the code.

This patch does not change the existing behavior.
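
For illustration only, here is a minimal sketch of how the two new
iteration levels are meant to be used (the macros are the list-based
iterators introduced below; the helper function and the pr_info()
message are made up for the example):

	static void dump_patched_funcs(struct klp_patch *patch)
	{
		struct klp_object *obj;
		struct klp_func *func;

		klp_for_each_object(patch, obj)
			klp_for_each_func(obj, func)
				pr_info("%s: patching %s\n",
					patch->mod->name, func->old_name);
	}

The static arrays defined by the livepatch module stay untouched. They
are only linked into patch->obj_list and obj->func_list in
klp_init_patch_early() so that static and dynamic entries can later be
walked the same way.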

Signed-off-by: Jason Baron <jbaron@akamai.com>
[pmladek@suse.com: Initialize lists before init calls]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
---
 include/linux/livepatch.h | 19 +++++++++++++++++--
 kernel/livepatch/core.c   |  9 +++++++--
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 8f9c19c69744..e117e20ff771 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/ftrace.h>
 #include <linux/completion.h>
+#include <linux/list.h>
 
 #if IS_ENABLED(CONFIG_LIVEPATCH)
 
@@ -42,6 +43,7 @@
  *		can be found (optional)
  * @old_func:	pointer to the function being patched
  * @kobj:	kobject for sysfs resources
+ * @node:	list node for klp_object func_list
  * @stack_node:	list node for klp_ops func_stack list
  * @old_size:	size of the old function
  * @new_size:	size of the new function
@@ -80,6 +82,7 @@ struct klp_func {
 	/* internal */
 	void *old_func;
 	struct kobject kobj;
+	struct list_head node;
 	struct list_head stack_node;
 	unsigned long old_size, new_size;
 	bool kobj_added;
@@ -117,6 +120,8 @@ struct klp_callbacks {
  * @funcs:	function entries for functions to be patched in the object
  * @callbacks:	functions to be executed pre/post (un)patching
  * @kobj:	kobject for sysfs resources
+ * @func_list:	dynamic list of the function entries
+ * @node:	list node for klp_patch obj_list
  * @mod:	kernel module associated with the patched object
  *		(NULL for vmlinux)
  * @kobj_added: @kobj has been added and needs freeing
@@ -130,6 +135,8 @@ struct klp_object {
 
 	/* internal */
 	struct kobject kobj;
+	struct list_head func_list;
+	struct list_head node;
 	struct module *mod;
 	bool kobj_added;
 	bool patched;
@@ -141,6 +148,7 @@ struct klp_object {
  * @objs:	object entries for kernel objects to be patched
  * @list:	list node for global list of actively used patches
  * @kobj:	kobject for sysfs resources
+ * @obj_list:	dynamic list of the object entries
  * @kobj_added: @kobj has been added and needs freeing
  * @enabled:	the patch is enabled (but operation may be incomplete)
  * @forced:	was involved in a forced transition
@@ -155,6 +163,7 @@ struct klp_patch {
 	/* internal */
 	struct list_head list;
 	struct kobject kobj;
+	struct list_head obj_list;
 	bool kobj_added;
 	bool enabled;
 	bool forced;
@@ -162,14 +171,20 @@ struct klp_patch {
 	struct completion finish;
 };
 
-#define klp_for_each_object(patch, obj) \
+#define klp_for_each_object_static(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
-#define klp_for_each_func(obj, func) \
+#define klp_for_each_object(patch, obj)	\
+	list_for_each_entry(obj, &patch->obj_list, node)
+
+#define klp_for_each_func_static(obj, func) \
 	for (func = obj->funcs; \
 	     func->old_name || func->new_func || func->old_sympos; \
 	     func++)
 
+#define klp_for_each_func(obj, func)	\
+	list_for_each_entry(func, &obj->func_list, node)
+
 int klp_enable_patch(struct klp_patch *);
 
 void arch_klp_init_object_loaded(struct klp_patch *patch,
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index bd41b03a72d5..37d0d3645fa6 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -659,20 +659,25 @@ static int klp_init_patch_early(struct klp_patch *patch)
 		return -EINVAL;
 
 	INIT_LIST_HEAD(&patch->list);
+	INIT_LIST_HEAD(&patch->obj_list);
 	patch->kobj_added = false;
 	patch->enabled = false;
 	patch->forced = false;
 	INIT_WORK(&patch->free_work, klp_free_patch_work_fn);
 	init_completion(&patch->finish);
 
-	klp_for_each_object(patch, obj) {
+	klp_for_each_object_static(patch, obj) {
 		if (!obj->funcs)
 			return -EINVAL;
 
+		INIT_LIST_HEAD(&obj->func_list);
 		obj->kobj_added = false;
+		list_add_tail(&obj->node, &patch->obj_list);
 
-		klp_for_each_func(obj, func)
+		klp_for_each_func_static(obj, func) {
 			func->kobj_added = false;
+			list_add_tail(&func->node, &obj->func_list);
+		}
 	}
 
 	if (!try_module_get(patch->mod))
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v15 07/11] livepatch: Add atomic replace
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (5 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 06/11] livepatch: Use lists to manage patches, objects and functions Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-10 13:05   ` Miroslav Benes
  2019-01-09 12:43 ` [PATCH v15 08/11] livepatch: Remove Nop structures when unused Petr Mladek
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek, Jessica Yu

From: Jason Baron <jbaron@akamai.com>

Sometimes we would like to revert a particular fix. Currently, this
is not easy because we want to keep all other fixes active and we
can only revert the last applied patch.

One solution would be to apply a new patch that implements all
the reverted functions the same way as the original code. It would work
as expected but there would be unnecessary redirections. In addition,
it would also require knowing which functions need to be reverted at
build time.

Another problem is when there are many patches that touch the same
functions. There might be dependencies between patches that are
not enforced on the kernel side. Also it might be pretty hard to
actually prepare the patch and ensure compatibility with the other
patches.

Atomic replace && cumulative patches:

A better solution would be to create a cumulative patch and say that
it replaces all older ones.

This patch adds a new "replace" flag to struct klp_patch. When it is
enabled, a set of 'nop' klp_func structures will be dynamically created
for all functions that are already being patched but that will no longer be
modified by the new patch. They are used as a new target during
the patch transition.

The idea is to handle the nop structures like the static ones. When
the dynamic structures are allocated, we initialize all values that
are normally statically defined.

The only exception is "new_func" in struct klp_func. It has to point
to the original function and the address is known only when the object
(module) is loaded. Note that we really need to set it. The address is
used, for example, in klp_check_stack_func().

Nevertheless we still need to distinguish the dynamically allocated
structures in some operations. For this, we add a "nop" flag to
struct klp_func and a "dynamic" flag to struct klp_object. They
need special handling in the following situations:

  + The structures are added into the lists of objects and functions
    immediately. In fact, the lists were created for this purpose.

  + The address of the original function is known only when the patched
    object (module) is loaded. Therefore it is copied later in
    klp_init_object_loaded().

  + The ftrace handler must not set PC to func->new_func. It would cause
    infinite loop because the address points back to the beginning of
    the original function.

  + The various free() functions must free the structure itself.

Note that other ways to detect the dynamic structures are not considered
safe. For example, even the statically defined struct klp_object might
include an empty funcs array. It might be there just to run some callbacks.

Also note that the safe iterator must be used in the free() functions.
Otherwise already freed structures might get accessed.

Special callbacks handling:

The callbacks from the replaced patches are intentionally _not_ called.
It would be pretty hard to define reasonable semantics and implement them.

It might even be counter-productive. The new patch is cumulative. It is
supposed to include most of the changes from older patches. In most cases,
it will not want to call the pre_unpatch() and post_unpatch() callbacks from
the replaced patches. It would disable/break things for no good reason.
Also it should be easier to handle various scenarios in a single script
in the new patch than to think about interactions caused by running many
scripts from older patches. Not to mention that the old scripts would not
even expect to be called in this situation.

Removing replaced patches:

One nice effect of the cumulative patches is that the code from the
older patches is no longer used. Therefore the replaced patches can
be removed. It has several advantages:

  + The nop structs will no longer be necessary and might be removed.
    This would save memory, restore performance (no ftrace handler),
    and allow a clear view of what is really patched.

  + Disabling the patch will cause the original code to be used everywhere.
    Therefore the livepatch callbacks have to handle only one scenario.
    Note that the situation is already complex enough when the patch
    gets enabled. It is currently solved by calling callbacks only from
    the new cumulative patch.

  + The state is clean in both the sysfs interface and lsmod. The modules
    with the replaced livepatches might even get removed from the system.

Some people actually expected this behavior from the beginning. After all,
a cumulative patch is supposed to "completely" replace an existing one.
It is like when a new version of an application replaces an older one.

This patch does the first step. It removes the replaced patches from
the list of patches. This is safe because the consistency model ensures
that they are no longer used. In other words, each process works only with
the structures from klp_transition_patch.

The removal is done by a special function. It combines actions done by
__klp_disable_patch() and klp_complete_transition(). But it is a fast
track without all the transition-related handling.
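
To make this more concrete, here is a hypothetical example (not part of
this patch; the foo/bar/livepatch_v2_foo names are made up). Assume an
older livepatch modified both foo() and bar(), while the new cumulative
patch below only keeps the fix for foo():

	static struct klp_func funcs_v2[] = {
		{
			.old_name = "foo",
			/* livepatch_v2_foo() is the new implementation of foo() */
			.new_func = livepatch_v2_foo,
		}, { }
	};

	static struct klp_object objs_v2[] = {
		{
			/* name == NULL means patching vmlinux */
			.funcs = funcs_v2,
		}, { }
	};

	static struct klp_patch patch_v2 = {
		.mod = THIS_MODULE,
		.objs = objs_v2,
		.replace = true,
	};

When patch_v2 gets enabled, klp_add_nops() creates a dynamic nop
klp_func for bar(). Once the transition finishes, bar() runs the
original code again and the older livepatch is removed from the list
of patches.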

Signed-off-by: Jason Baron <jbaron@akamai.com>
[pmladek@suse.com: Split, reuse existing code, simplified]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
---
 Documentation/livepatch/livepatch.txt |  31 ++++-
 include/linux/livepatch.h             |  12 ++
 kernel/livepatch/core.c               | 232 ++++++++++++++++++++++++++++++++--
 kernel/livepatch/core.h               |   1 +
 kernel/livepatch/patch.c              |   8 ++
 kernel/livepatch/transition.c         |   3 +
 6 files changed, 273 insertions(+), 14 deletions(-)

diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 8f56490a4bb6..2a70f43166f6 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -15,8 +15,9 @@ Table of Contents:
 5. Livepatch life-cycle
    5.1. Loading
    5.2. Enabling
-   5.3. Disabling
-   5.4. Removing
+   5.3. Replacing
+   5.4. Disabling
+   5.5. Removing
 6. Sysfs
 7. Limitations
 
@@ -300,8 +301,12 @@ into three levels:
 5. Livepatch life-cycle
 =======================
 
-Livepatching can be described by four basic operations:
-loading, enabling, disabling, removing.
+Livepatching can be described by five basic operations:
+loading, enabling, replacing, disabling, removing.
+
+Note that the replacing and the disabling operations are mutually
+exclusive. They have the same result for the given patch but
+not for the system.
 
 
 5.1. Loading
@@ -347,7 +352,21 @@ to '0'.
     the "Consistency model" section.
 
 
-5.3. Disabling
+5.3. Replacing
+--------------
+
+All enabled patches might get replaced by a cumulative patch that
+has the .replace flag set.
+
+Once the new patch is enabled and the 'transition' finishes, then
+all the functions (struct klp_func) associated with the replaced
+patches are removed from the corresponding struct klp_ops. Also
+the ftrace handler is unregistered and the struct klp_ops is
+freed when the related function is not modified by the new patch
+and the func_stack list becomes empty.
+
+
+5.4. Disabling
 --------------
 
 Enabled patches might get disabled by writing '0' to
@@ -372,7 +391,7 @@ Note that patches must be disabled in exactly the reverse order in which
 they were enabled. It makes the problem and the implementation much easier.
 
 
-5.4. Removing
+5.5. Removing
 -------------
 
 Module removal is only safe when there are no users of functions provided
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index e117e20ff771..53551f470722 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -48,6 +48,7 @@
  * @old_size:	size of the old function
  * @new_size:	size of the new function
  * @kobj_added: @kobj has been added and needs freeing
+ * @nop:        temporary patch to use the original code again; dyn. allocated
  * @patched:	the func has been added to the klp_ops list
  * @transition:	the func is currently being applied or reverted
  *
@@ -86,6 +87,7 @@ struct klp_func {
 	struct list_head stack_node;
 	unsigned long old_size, new_size;
 	bool kobj_added;
+	bool nop;
 	bool patched;
 	bool transition;
 };
@@ -125,6 +127,7 @@ struct klp_callbacks {
  * @mod:	kernel module associated with the patched object
  *		(NULL for vmlinux)
  * @kobj_added: @kobj has been added and needs freeing
+ * @dynamic:    temporary object for nop functions; dynamically allocated
  * @patched:	the object's funcs have been added to the klp_ops list
  */
 struct klp_object {
@@ -139,6 +142,7 @@ struct klp_object {
 	struct list_head node;
 	struct module *mod;
 	bool kobj_added;
+	bool dynamic;
 	bool patched;
 };
 
@@ -146,6 +150,7 @@ struct klp_object {
  * struct klp_patch - patch structure for live patching
  * @mod:	reference to the live patch module
  * @objs:	object entries for kernel objects to be patched
+ * @replace:	replace all actively used patches
  * @list:	list node for global list of actively used patches
  * @kobj:	kobject for sysfs resources
  * @obj_list:	dynamic list of the object entries
@@ -159,6 +164,7 @@ struct klp_patch {
 	/* external */
 	struct module *mod;
 	struct klp_object *objs;
+	bool replace;
 
 	/* internal */
 	struct list_head list;
@@ -174,6 +180,9 @@ struct klp_patch {
 #define klp_for_each_object_static(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
+#define klp_for_each_object_safe(patch, obj, tmp_obj)		\
+	list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
+
 #define klp_for_each_object(patch, obj)	\
 	list_for_each_entry(obj, &patch->obj_list, node)
 
@@ -182,6 +191,9 @@ struct klp_patch {
 	     func->old_name || func->new_func || func->old_sympos; \
 	     func++)
 
+#define klp_for_each_func_safe(obj, func, tmp_func)			\
+	list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
+
 #define klp_for_each_func(obj, func)	\
 	list_for_each_entry(func, &obj->func_list, node)
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 37d0d3645fa6..ecb7660f1d8b 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -92,6 +92,40 @@ static bool klp_initialized(void)
 	return !!klp_root_kobj;
 }
 
+static struct klp_func *klp_find_func(struct klp_object *obj,
+				      struct klp_func *old_func)
+{
+	struct klp_func *func;
+
+	klp_for_each_func(obj, func) {
+		if ((strcmp(old_func->old_name, func->old_name) == 0) &&
+		    (old_func->old_sympos == func->old_sympos)) {
+			return func;
+		}
+	}
+
+	return NULL;
+}
+
+static struct klp_object *klp_find_object(struct klp_patch *patch,
+					  struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+
+	klp_for_each_object(patch, obj) {
+		if (klp_is_module(old_obj)) {
+			if (klp_is_module(obj) &&
+			    strcmp(old_obj->name, obj->name) == 0) {
+				return obj;
+			}
+		} else if (!klp_is_module(obj)) {
+			return obj;
+		}
+	}
+
+	return NULL;
+}
+
 struct klp_find_arg {
 	const char *objname;
 	const char *name;
@@ -418,6 +452,121 @@ static struct attribute *klp_patch_attrs[] = {
 	NULL
 };
 
+static void klp_free_object_dynamic(struct klp_object *obj)
+{
+	kfree(obj->name);
+	kfree(obj);
+}
+
+static struct klp_object *klp_alloc_object_dynamic(const char *name)
+{
+	struct klp_object *obj;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return NULL;
+
+	if (name) {
+		obj->name = kstrdup(name, GFP_KERNEL);
+		if (!obj->name) {
+			kfree(obj);
+			return NULL;
+		}
+	}
+
+	INIT_LIST_HEAD(&obj->func_list);
+	obj->dynamic = true;
+
+	return obj;
+}
+
+static void klp_free_func_nop(struct klp_func *func)
+{
+	kfree(func->old_name);
+	kfree(func);
+}
+
+static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,
+					   struct klp_object *obj)
+{
+	struct klp_func *func;
+
+	func = kzalloc(sizeof(*func), GFP_KERNEL);
+	if (!func)
+		return NULL;
+
+	if (old_func->old_name) {
+		func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
+		if (!func->old_name) {
+			kfree(func);
+			return NULL;
+		}
+	}
+
+	/*
+	 * func->new_func is the same as func->old_func. These addresses are
+	 * set when the object is loaded, see klp_init_object_loaded().
+	 */
+	func->old_sympos = old_func->old_sympos;
+	func->nop = true;
+
+	return func;
+}
+
+static int klp_add_object_nops(struct klp_patch *patch,
+			       struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	struct klp_func *func, *old_func;
+
+	obj = klp_find_object(patch, old_obj);
+
+	if (!obj) {
+		obj = klp_alloc_object_dynamic(old_obj->name);
+		if (!obj)
+			return -ENOMEM;
+
+		list_add_tail(&obj->node, &patch->obj_list);
+	}
+
+	klp_for_each_func(old_obj, old_func) {
+		func = klp_find_func(obj, old_func);
+		if (func)
+			continue;
+
+		func = klp_alloc_func_nop(old_func, obj);
+		if (!func)
+			return -ENOMEM;
+
+		list_add_tail(&func->node, &obj->func_list);
+	}
+
+	return 0;
+}
+
+/*
+ * Add 'nop' functions which simply return to the caller to run
+ * the original function. The 'nop' functions are added to a
+ * patch to facilitate a 'replace' mode.
+ */
+static int klp_add_nops(struct klp_patch *patch)
+{
+	struct klp_patch *old_patch;
+	struct klp_object *old_obj;
+
+	list_for_each_entry(old_patch, &klp_patches, list) {
+		klp_for_each_object(old_patch, old_obj) {
+			int err;
+
+			err = klp_add_object_nops(patch, old_obj);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
 static void klp_kobj_release_patch(struct kobject *kobj)
 {
 	struct klp_patch *patch;
@@ -434,6 +583,12 @@ static struct kobj_type klp_ktype_patch = {
 
 static void klp_kobj_release_object(struct kobject *kobj)
 {
+	struct klp_object *obj;
+
+	obj = container_of(kobj, struct klp_object, kobj);
+
+	if (obj->dynamic)
+		klp_free_object_dynamic(obj);
 }
 
 static struct kobj_type klp_ktype_object = {
@@ -443,6 +598,12 @@ static struct kobj_type klp_ktype_object = {
 
 static void klp_kobj_release_func(struct kobject *kobj)
 {
+	struct klp_func *func;
+
+	func = container_of(kobj, struct klp_func, kobj);
+
+	if (func->nop)
+		klp_free_func_nop(func);
 }
 
 static struct kobj_type klp_ktype_func = {
@@ -452,12 +613,15 @@ static struct kobj_type klp_ktype_func = {
 
 static void klp_free_funcs(struct klp_object *obj)
 {
-	struct klp_func *func;
+	struct klp_func *func, *tmp_func;
 
-	klp_for_each_func(obj, func) {
+	klp_for_each_func_safe(obj, func, tmp_func) {
 		/* Might be called from klp_init_patch() error path. */
-		if (func->kobj_added)
+		if (func->kobj_added) {
 			kobject_put(&func->kobj);
+		} else if (func->nop) {
+			klp_free_func_nop(func);
+		}
 	}
 }
 
@@ -468,20 +632,27 @@ static void klp_free_object_loaded(struct klp_object *obj)
 
 	obj->mod = NULL;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func) {
 		func->old_func = NULL;
+
+		if (func->nop)
+			func->new_func = NULL;
+	}
 }
 
 static void klp_free_objects(struct klp_patch *patch)
 {
-	struct klp_object *obj;
+	struct klp_object *obj, *tmp_obj;
 
-	klp_for_each_object(patch, obj) {
+	klp_for_each_object_safe(patch, obj, tmp_obj) {
 		klp_free_funcs(obj);
 
 		/* Might be called from klp_init_patch() error path. */
-		if (obj->kobj_added)
+		if (obj->kobj_added) {
 			kobject_put(&obj->kobj);
+		} else if (obj->dynamic) {
+			klp_free_object_dynamic(obj);
+		}
 	}
 }
 
@@ -543,7 +714,14 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 {
 	int ret;
 
-	if (!func->old_name || !func->new_func)
+	if (!func->old_name)
+		return -EINVAL;
+
+	/*
+	 * NOPs get the address later. The patched module must be loaded,
+	 * see klp_init_object_loaded().
+	 */
+	if (!func->new_func && !func->nop)
 		return -EINVAL;
 
 	if (strlen(func->old_name) >= KSYM_NAME_LEN)
@@ -605,6 +783,9 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 			return -ENOENT;
 		}
 
+		if (func->nop)
+			func->new_func = func->old_func;
+
 		ret = kallsyms_lookup_size_offset((unsigned long)func->new_func,
 						  &func->new_size, NULL);
 		if (!ret) {
@@ -697,6 +878,12 @@ static int klp_init_patch(struct klp_patch *patch)
 		return ret;
 	patch->kobj_added = true;
 
+	if (patch->replace) {
+		ret = klp_add_nops(patch);
+		if (ret)
+			return ret;
+	}
+
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
@@ -869,6 +1056,35 @@ int klp_enable_patch(struct klp_patch *patch)
 EXPORT_SYMBOL_GPL(klp_enable_patch);
 
 /*
+ * This function removes replaced patches.
+ *
+ * We could be pretty aggressive here. It is called in the situation where
+ * these structures are no longer accessible. All functions are redirected
+ * by the klp_transition_patch. They use either a new code or they are in
+ * the original code because of the special nop function patches.
+ *
+ * The only exception is when the transition was forced. In this case,
+ * klp_ftrace_handler() might still see the replaced patch on the stack.
+ * Fortunately, it is carefully designed to work with removed functions
+ * thanks to RCU. We only have to keep the patch modules on the system.
+ * This is handled transparently by keeping the module reference when forced.
+ */
+void klp_discard_replaced_patches(struct klp_patch *new_patch)
+{
+	struct klp_patch *old_patch, *tmp_patch;
+
+	list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches, list) {
+		if (old_patch == new_patch)
+			return;
+
+		old_patch->enabled = false;
+		klp_unpatch_objects(old_patch);
+		klp_free_patch_start(old_patch);
+		schedule_work(&old_patch->free_work);
+	}
+}
+
+/*
  * Remove parts of patches that touch a given kernel module. The list of
  * patches processed might be limited. When limit is NULL, all patches
  * will be handled.
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index d4eefc520c08..f6a853adcc00 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -8,6 +8,7 @@ extern struct mutex klp_mutex;
 extern struct list_head klp_patches;
 
 void klp_free_patch_start(struct klp_patch *patch);
+void klp_discard_replaced_patches(struct klp_patch *new_patch);
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 825022d70912..0ff466ab4b5a 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -118,7 +118,15 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 		}
 	}
 
+	/*
+	 * NOPs are used to replace existing patches with the original code.
+	 * Do nothing! Setting pc would cause an infinite loop.
+	 */
+	if (func->nop)
+		goto unlock;
+
 	klp_arch_set_pc(regs, (unsigned long)func->new_func);
+
 unlock:
 	preempt_enable_notrace();
 }
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index c9917a24b3a4..f4c5908a9731 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -85,6 +85,9 @@ static void klp_complete_transition(void)
 		 klp_transition_patch->mod->name,
 		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+		klp_discard_replaced_patches(klp_transition_patch);
+
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
 		 * All tasks have transitioned to KLP_UNPATCHED so we can now
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v15 08/11] livepatch: Remove Nop structures when unused
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (6 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 07/11] livepatch: Add atomic replace Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-10 13:42   ` Miroslav Benes
  2019-01-09 12:43 ` [PATCH v15 09/11] livepatch: Atomic replace and cumulative patches documentation Petr Mladek
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

Replaced patches are removed from the stack when the transition is
finished. This means that the nop structures will never be needed again
and can be removed. Why should we care?

  + Nop structures give the impression that the function is patched
    even though the ftrace handler has no effect.

  + Ftrace handlers do not come for free. They cause a slowdown that might
    be visible in some workloads. The ftrace-related slowdown might
    actually be the reason why the function is no longer patched in
    the new cumulative patch. One would expect that the cumulative patch
    would help solve these problems as well.

  + Cumulative patches are supposed to replace any earlier version of
    the patch. The number of NOPs depends on which version was replaced.
    This multiplies the number of scenarios that might happen.

    One might say that NOPs are innocent. But there are even optimized
    NOP instructions for different processors, for example, see
    arch/x86/kernel/alternative.c. And klp_ftrace_handler() is much
    more complicated.

  + It sounds natural to clean up a mess that is no longer needed.
    Things could only get worse if we did not do it.

This patch allows the dynamic structures to be unpatched and freed
independently once the transition finishes.

The free part is a bit tricky because the kobject free callbacks are called
asynchronously and cannot easily be waited for. Fortunately, we do not
have to wait: any further access can be avoided by removing the structures
from the dynamic lists first.
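
A minimal sketch of the idea (simplified; the helper name is made up,
the real code is in __klp_free_funcs() below): the entry is unlinked
from the dynamic list first, so nothing can reach it anymore, and the
actual kfree() may happen asynchronously in the kobject release
callback:

	static void free_nop_func_sketch(struct klp_func *func)
	{
		/* No further access is possible via obj->func_list. */
		list_del(&func->node);

		/*
		 * klp_kobj_release_func() will call klp_free_func_nop()
		 * later. There is no need to wait for it.
		 */
		kobject_put(&func->kobj);
	}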

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 kernel/livepatch/core.c       | 48 ++++++++++++++++++++++++++++++++++++++++---
 kernel/livepatch/core.h       |  1 +
 kernel/livepatch/patch.c      | 31 +++++++++++++++++++++++-----
 kernel/livepatch/patch.h      |  1 +
 kernel/livepatch/transition.c |  4 +++-
 5 files changed, 76 insertions(+), 9 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index ecb7660f1d8b..113645ee86b6 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -611,11 +611,16 @@ static struct kobj_type klp_ktype_func = {
 	.sysfs_ops = &kobj_sysfs_ops,
 };
 
-static void klp_free_funcs(struct klp_object *obj)
+static void __klp_free_funcs(struct klp_object *obj, bool nops_only)
 {
 	struct klp_func *func, *tmp_func;
 
 	klp_for_each_func_safe(obj, func, tmp_func) {
+		if (nops_only && !func->nop)
+			continue;
+
+		list_del(&func->node);
+
 		/* Might be called from klp_init_patch() error path. */
 		if (func->kobj_added) {
 			kobject_put(&func->kobj);
@@ -640,12 +645,17 @@ static void klp_free_object_loaded(struct klp_object *obj)
 	}
 }
 
-static void klp_free_objects(struct klp_patch *patch)
+static void __klp_free_objects(struct klp_patch *patch, bool nops_only)
 {
 	struct klp_object *obj, *tmp_obj;
 
 	klp_for_each_object_safe(patch, obj, tmp_obj) {
-		klp_free_funcs(obj);
+		__klp_free_funcs(obj, nops_only);
+
+		if (nops_only && !obj->dynamic)
+			continue;
+
+		list_del(&obj->node);
 
 		/* Might be called from klp_init_patch() error path. */
 		if (obj->kobj_added) {
@@ -656,6 +666,16 @@ static void klp_free_objects(struct klp_patch *patch)
 	}
 }
 
+static void klp_free_objects(struct klp_patch *patch)
+{
+	__klp_free_objects(patch, false);
+}
+
+static void klp_free_objects_dynamic(struct klp_patch *patch)
+{
+	__klp_free_objects(patch, true);
+}
+
 /*
  * This function implements the free operations that can be called safely
  * under klp_mutex.
@@ -1085,6 +1105,28 @@ void klp_discard_replaced_patches(struct klp_patch *new_patch)
 }
 
 /*
+ * This function removes the dynamically allocated 'nop' functions.
+ *
+ * We could be pretty aggressive. NOPs do not change the existing
+ * behavior except for adding an unnecessary delay in the ftrace handler.
+ *
+ * It is safe even when the transition was forced. The ftrace handler
+ * will see a valid ops->func_stack entry thanks to RCU.
+ *
+ * We could even free the NOPs structures. They must be the last entry
+ * in ops->func_stack. Therefore unregister_ftrace_function() is called.
+ * It does the same as klp_synchronize_transition() to make sure that
+ * nobody is inside the ftrace handler once the operation finishes.
+ *
+ * IMPORTANT: It must be called right after removing the replaced patches!
+ */
+void klp_discard_nops(struct klp_patch *new_patch)
+{
+	klp_unpatch_objects_dynamic(klp_transition_patch);
+	klp_free_objects_dynamic(klp_transition_patch);
+}
+
+/*
  * Remove parts of patches that touch a given kernel module. The list of
  * patches processed might be limited. When limit is NULL, all patches
  * will be handled.
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index f6a853adcc00..e6200f38701f 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -9,6 +9,7 @@ extern struct list_head klp_patches;
 
 void klp_free_patch_start(struct klp_patch *patch);
 void klp_discard_replaced_patches(struct klp_patch *new_patch);
+void klp_discard_nops(struct klp_patch *new_patch);
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 0ff466ab4b5a..99cb3ad05eb4 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -246,15 +246,26 @@ static int klp_patch_func(struct klp_func *func)
 	return ret;
 }
 
-void klp_unpatch_object(struct klp_object *obj)
+static void __klp_unpatch_object(struct klp_object *obj, bool nops_only)
 {
 	struct klp_func *func;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func) {
+		if (nops_only && !func->nop)
+			continue;
+
 		if (func->patched)
 			klp_unpatch_func(func);
+	}
 
-	obj->patched = false;
+	if (obj->dynamic || !nops_only)
+		obj->patched = false;
+}
+
+
+void klp_unpatch_object(struct klp_object *obj)
+{
+	__klp_unpatch_object(obj, false);
 }
 
 int klp_patch_object(struct klp_object *obj)
@@ -277,11 +288,21 @@ int klp_patch_object(struct klp_object *obj)
 	return 0;
 }
 
-void klp_unpatch_objects(struct klp_patch *patch)
+static void __klp_unpatch_objects(struct klp_patch *patch, bool nops_only)
 {
 	struct klp_object *obj;
 
 	klp_for_each_object(patch, obj)
 		if (obj->patched)
-			klp_unpatch_object(obj);
+			__klp_unpatch_object(obj, nops_only);
+}
+
+void klp_unpatch_objects(struct klp_patch *patch)
+{
+	__klp_unpatch_objects(patch, false);
+}
+
+void klp_unpatch_objects_dynamic(struct klp_patch *patch)
+{
+	__klp_unpatch_objects(patch, true);
 }
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index a9b16e513656..d5f2fbe373e0 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -30,5 +30,6 @@ struct klp_ops *klp_find_ops(void *old_func);
 int klp_patch_object(struct klp_object *obj);
 void klp_unpatch_object(struct klp_object *obj);
 void klp_unpatch_objects(struct klp_patch *patch);
+void klp_unpatch_objects_dynamic(struct klp_patch *patch);
 
 #endif /* _LIVEPATCH_PATCH_H */
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index f4c5908a9731..300273819674 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -85,8 +85,10 @@ static void klp_complete_transition(void)
 		 klp_transition_patch->mod->name,
 		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
 		klp_discard_replaced_patches(klp_transition_patch);
+		klp_discard_nops(klp_transition_patch);
+	}
 
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v15 09/11] livepatch: Atomic replace and cumulative patches documentation
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (7 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 08/11] livepatch: Remove Nop structures when unused Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-09 12:43 ` [PATCH v15 10/11] livepatch: Remove ordering (stacking) of the livepatches Petr Mladek
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

User documentation for the atomic replace feature. It makes it easier
to maintain livepatches using so-called cumulative patches.

Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 Documentation/livepatch/cumulative-patches.txt | 105 +++++++++++++++++++++++++
 Documentation/livepatch/livepatch.txt          |   2 +
 2 files changed, 107 insertions(+)
 create mode 100644 Documentation/livepatch/cumulative-patches.txt

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
new file mode 100644
index 000000000000..e7cf5be69f23
--- /dev/null
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -0,0 +1,105 @@
+===================================
+Atomic Replace & Cumulative Patches
+===================================
+
+There might be dependencies between livepatches. If multiple patches need
+to do different changes to the same function(s) then we need to define
+an order in which the patches will be installed. And function implementations
+from any newer livepatch must be done on top of the older ones.
+
+This might become a maintenance nightmare. Especially if anyone would want
+to remove a patch that is in the middle of the stack.
+
+An elegant solution comes with the feature called "Atomic Replace". It allows
+creation of so called "Cumulative Patches". They include all wanted changes
+from all older livepatches and completely replace them in one transition.
+
+Usage
+-----
+
+The atomic replace can be enabled by setting "replace" flag in struct klp_patch,
+for example:
+
+	static struct klp_patch patch = {
+		.mod = THIS_MODULE,
+		.objs = objs,
+		.replace = true,
+	};
+
+Such a patch is added on top of the livepatch stack when enabled.
+
+All processes are then migrated to use the code only from the new patch.
+Once the transition is finished, all older patches are automatically
+disabled and removed from the stack of patches.
+
+Ftrace handlers are transparently removed from functions that are no
+longer modified by the new cumulative patch.
+
+As a result, the livepatch authors might maintain sources only for one
+cumulative patch. It helps to keep the patch consistent while adding or
+removing various fixes or features.
+
+Users could keep only the last patch installed on the system after
+the transition has finished. It helps to clearly see what code is
+actually in use. Also the livepatch might then be seen as a "normal"
+module that modifies the kernel behavior. The only difference is that
+it can be updated at runtime without breaking its functionality.
+
+
+Features
+--------
+
+The atomic replace allows:
+
+  + Atomically revert some functions in a previous patch while
+    upgrading other functions.
+
+  + Remove eventual performance impact caused by core redirection
+    for functions that are no longer patched.
+
+  + Decrease user confusion about stacking order and what code
+    is actually in use.
+
+
+Limitations:
+------------
+
+  + Once the operation finishes, there is no straightforward way
+    to reverse it and restore the replaced patches atomically.
+
+    A good practice is to set the .replace flag in any released livepatch.
+    Then re-adding an older livepatch is equivalent to downgrading
+    to that patch. This is safe as long as the livepatches do _not_ do
+    extra modifications in (un)patching callbacks or in the module_init()
+    or module_exit() functions, see below.
+
+    Also note that the replaced patch can be removed and loaded again
+    only when the transition was not forced.
+
+
+  + Only the (un)patching callbacks from the _new_ cumulative livepatch are
+    executed. Any callbacks from the replaced patches are ignored.
+
+    In other words, the cumulative patch is responsible for doing any actions
+    that are necessary to properly replace any older patch.
+
+    As a result, it might be dangerous to replace newer cumulative patches by
+    older ones. The old livepatches might not provide the necessary callbacks.
+
+    This might be seen as a limitation in some scenarios. But it makes life
+    easier in many others. Only the new cumulative livepatch knows what
+    fixes/features are added/removed and what special actions are necessary
+    for a smooth transition.
+
+    In any case, it would be a nightmare to think about the order of
+    the various callbacks and their interactions if the callbacks from all
+    enabled patches were called.
+
+
+  + There is no special handling of shadow variables. Livepatch authors
+    must define their own rules for how to pass them from one cumulative
+    patch to the next. In particular, they should not blindly remove
+    them in module_exit() functions.
+
+    A good practice might be to remove shadow variables in the post-unpatch
+    callback. It is called only when the livepatch is properly disabled.
diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 2a70f43166f6..6f32d6ea2fcb 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -365,6 +365,8 @@ the ftrace handler is unregistered and the struct klp_ops is
 freed when the related function is not modified by the new patch
 and the func_stack list becomes empty.
 
+See Documentation/livepatch/cumulative-patches.txt for more details.
+
 
 5.4. Disabling
 --------------
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v15 10/11] livepatch: Remove ordering (stacking) of the livepatches
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (8 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 09/11] livepatch: Atomic replace and cumulative patches documentation Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-10 13:56   ` Miroslav Benes
  2019-01-09 12:43 ` [PATCH v15 11/11] selftests/livepatch: introduce tests Petr Mladek
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

The atomic replace and cumulative patches were introduced as a more secure
way to handle dependent patches. They simplify the logic:

  + Any new cumulative patch is supposed to take over shadow variables
    and changes made by callbacks from previous livepatches.

  + All replaced patches are discarded and the modules can be unloaded.
    As a result, there is only one scenario when a cumulative livepatch
    gets disabled.

The different handling of "normal" and cumulative patches might cause
confusion. It would make sense to keep only one mode. On the other hand,
it would be heavy-handed to force the use of cumulative livepatches even
for trivial and independent (hot) fixes.

However, the stack of patches is not really necessary any longer.
The patch ordering was never clearly visible via the sysfs interface.
Also the "normal" patches need a lot of caution anyway.

Note that the list of enabled patches is still necessary but the ordering
is no longer enforced.

Otherwise, the code is ready to disable livepatches in a random order.
Namely, klp_check_stack_func() always looks for the function from
the livepatch that is being disabled. klp_func structures are just
removed from the related func_stack. Finally, the ftrace handler
is removed only when the func_stack becomes empty.
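
The following sketch is illustrative only (the klp_ops fields are internal
and the names here are an assumption, not a copy of kernel/livepatch/patch.c).
It shows why the order stops mattering: unpatching a function just drops its
entry from the shared func_stack, and the ftrace handler goes away only
together with the last entry:

	static void sketch_unpatch_func(struct klp_func *func,
					struct klp_ops *ops)
	{
		if (list_is_singular(&ops->func_stack)) {
			/* Last livepatch modifying this function. */
			unregister_ftrace_function(&ops->fops);
			list_del_rcu(&func->stack_node);
			list_del(&ops->node);
			kfree(ops);
		} else {
			/* Other patches remain; removal works in any order. */
			list_del_rcu(&func->stack_node);
		}
	}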

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 Documentation/livepatch/cumulative-patches.txt | 11 ++++-------
 Documentation/livepatch/livepatch.txt          | 13 +++++++------
 kernel/livepatch/core.c                        |  4 ----
 3 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
index e7cf5be69f23..0012808e8d44 100644
--- a/Documentation/livepatch/cumulative-patches.txt
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -7,8 +7,8 @@ to do different changes to the same function(s) then we need to define
 an order in which the patches will be installed. And function implementations
 from any newer livepatch must be done on top of the older ones.
 
-This might become a maintenance nightmare. Especially if anyone would want
-to remove a patch that is in the middle of the stack.
+This might become a maintenance nightmare. Especially when more patches
+modify the same function in different ways.
 
 An elegant solution comes with the feature called "Atomic Replace". It allows
 creation of so called "Cumulative Patches". They include all wanted changes
@@ -26,11 +26,9 @@ for example:
 		.replace = true,
 	};
 
-Such a patch is added on top of the livepatch stack when enabled.
-
 All processes are then migrated to use the code only from the new patch.
 Once the transition is finished, all older patches are automatically
-disabled and removed from the stack of patches.
+disabled.
 
 Ftrace handlers are transparently removed from functions that are no
 longer modified by the new cumulative patch.
@@ -57,8 +55,7 @@ The atomic replace allows:
   + Remove eventual performance impact caused by core redirection
     for functions that are no longer patched.
 
-  + Decrease user confusion about stacking order and what code
-    is actually in use.
+  + Decrease user confusion about dependencies between livepatches.
 
 
 Limitations:
diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 6f32d6ea2fcb..71d7f286ec4d 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -143,9 +143,9 @@ without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
 the kernel livepatching.
 
 The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
-is in transition.  Only a single patch (the topmost patch on the stack)
-can be in transition at a given time.  A patch can remain in transition
-indefinitely, if any of the tasks are stuck in the initial patch state.
+is in transition.  Only a single patch can be in transition at a given
+time.  A patch can remain in transition indefinitely, if any of the tasks
+are stuck in the initial patch state.
 
 A transition can be reversed and effectively canceled by writing the
 opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
@@ -351,6 +351,10 @@ to '0'.
     The right implementation is selected by the ftrace handler, see
     the "Consistency model" section.
 
+    That said, it is highly recommended to use cumulative livepatches
+    because they help keep all changes consistent. In this case, functions
+    are patched by at most two livepatches at a time, and only during
+    the transition period.
+
 
 5.3. Replacing
 --------------
@@ -389,9 +393,6 @@ becomes empty.
 
 Third, the sysfs interface is destroyed.
 
-Note that patches must be disabled in exactly the reverse order in which
-they were enabled. It makes the problem and the implementation much easier.
-
 
 5.5. Removing
 -------------
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 113645ee86b6..adca5cf07f7e 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -925,10 +925,6 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	if (klp_transition_patch)
 		return -EBUSY;
 
-	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches))
-		return -EBUSY;
-
 	klp_init_transition(patch, KLP_UNPATCHED);
 
 	klp_for_each_object(patch, obj)
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v15 11/11] selftests/livepatch: introduce tests
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (9 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 10/11] livepatch: Remove ordering (stacking) of the livepatches Petr Mladek
@ 2019-01-09 12:43 ` Petr Mladek
  2019-01-11 17:44 ` [PATCH v15 00/11] livepatch: Atomic replace feature Josh Poimboeuf
  2019-01-11 19:56 ` Jiri Kosina
  12 siblings, 0 replies; 20+ messages in thread
From: Petr Mladek @ 2019-01-09 12:43 UTC (permalink / raw)
  To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
  Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching,
	linux-kernel, Petr Mladek

From: Joe Lawrence <joe.lawrence@redhat.com>

Add a few livepatch modules and simple target modules that the included
regression suite can run tests against:

  - basic livepatching (multiple patches, atomic replace)
  - pre/post (un)patch callbacks
  - shadow variable API

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Tested-by: Miroslav Benes <mbenes@suse.cz>
Tested-by: Alice Ferrazzi <alice.ferrazzi@gmail.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 Documentation/livepatch/callbacks.txt              | 489 +----------------
 MAINTAINERS                                        |   1 +
 lib/Kconfig.debug                                  |  22 +-
 lib/Makefile                                       |   2 +
 lib/livepatch/Makefile                             |  15 +
 lib/livepatch/test_klp_atomic_replace.c            |  57 ++
 lib/livepatch/test_klp_callbacks_busy.c            |  43 ++
 lib/livepatch/test_klp_callbacks_demo.c            | 121 +++++
 lib/livepatch/test_klp_callbacks_demo2.c           |  93 ++++
 lib/livepatch/test_klp_callbacks_mod.c             |  24 +
 lib/livepatch/test_klp_livepatch.c                 |  51 ++
 lib/livepatch/test_klp_shadow_vars.c               | 236 +++++++++
 tools/testing/selftests/Makefile                   |   1 +
 tools/testing/selftests/livepatch/Makefile         |   8 +
 tools/testing/selftests/livepatch/README           |  43 ++
 tools/testing/selftests/livepatch/config           |   1 +
 tools/testing/selftests/livepatch/functions.sh     | 203 +++++++
 .../testing/selftests/livepatch/test-callbacks.sh  | 587 +++++++++++++++++++++
 .../testing/selftests/livepatch/test-livepatch.sh  | 168 ++++++
 .../selftests/livepatch/test-shadow-vars.sh        |  60 +++
 20 files changed, 1740 insertions(+), 485 deletions(-)
 create mode 100644 lib/livepatch/Makefile
 create mode 100644 lib/livepatch/test_klp_atomic_replace.c
 create mode 100644 lib/livepatch/test_klp_callbacks_busy.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo.c
 create mode 100644 lib/livepatch/test_klp_callbacks_demo2.c
 create mode 100644 lib/livepatch/test_klp_callbacks_mod.c
 create mode 100644 lib/livepatch/test_klp_livepatch.c
 create mode 100644 lib/livepatch/test_klp_shadow_vars.c
 create mode 100644 tools/testing/selftests/livepatch/Makefile
 create mode 100644 tools/testing/selftests/livepatch/README
 create mode 100644 tools/testing/selftests/livepatch/config
 create mode 100644 tools/testing/selftests/livepatch/functions.sh
 create mode 100755 tools/testing/selftests/livepatch/test-callbacks.sh
 create mode 100755 tools/testing/selftests/livepatch/test-livepatch.sh
 create mode 100755 tools/testing/selftests/livepatch/test-shadow-vars.sh

diff --git a/Documentation/livepatch/callbacks.txt b/Documentation/livepatch/callbacks.txt
index c9776f48e458..182e31d4abce 100644
--- a/Documentation/livepatch/callbacks.txt
+++ b/Documentation/livepatch/callbacks.txt
@@ -118,488 +118,9 @@ similar change to their hw_features value.  (Client functions of the
 value may need to be updated accordingly.)
 
 
-Test cases
-==========
-
-What follows is not an exhaustive test suite of every possible livepatch
-pre/post-(un)patch combination, but a selection that demonstrates a few
-important concepts.  Each test case uses the kernel modules located in
-the samples/livepatch/ and assumes that no livepatches are loaded at the
-beginning of the test.
-
-
-Test 1
-------
-
-Test a combination of loading a kernel module and a livepatch that
-patches a function in the first module.  (Un)load the target module
-before the livepatch module:
-
-- load target module
-- load livepatch
-- disable livepatch
-- unload target module
-- unload livepatch
-
-First load a target module:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   34.475708] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-On livepatch enable, before the livepatch transition starts, pre-patch
-callbacks are executed for vmlinux and livepatch_callbacks_mod (those
-klp_objects currently loaded).  After klp_objects are patched according
-to the klp_patch, their post-patch callbacks run and the transition
-completes:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   36.503719] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   36.504213] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   36.504238] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   36.504721] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   36.505849] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   37.727133] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   37.727232] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   37.727860] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   37.728792] livepatch: 'livepatch_callbacks_demo': patching complete
-
-Similarly, on livepatch disable, pre-patch callbacks run before the
-unpatching transition starts.  klp_objects are reverted, post-patch
-callbacks execute and the transition completes:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   38.510209] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   38.510234] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   38.510982] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   38.512209] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   39.711132] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   39.711210] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   39.711779] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   39.712735] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   42.534183] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-
-
-Test 2
-------
-
-This test is similar to the previous test, but (un)load the livepatch
-module before the target kernel module.  This tests the livepatch core's
-module_coming handler:
-
-- load livepatch
-- load target module
-- disable livepatch
-- unload livepatch
-- unload target module
-
-
-On livepatch enable, only pre/post-patch callbacks are executed for
-currently loaded klp_objects, in this case, vmlinux:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   44.553328] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   44.553997] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   44.554049] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   44.554845] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   45.727128] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   45.727212] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   45.727961] livepatch: 'livepatch_callbacks_demo': patching complete
-
-When a targeted module is subsequently loaded, only its pre/post-patch
-callbacks are executed:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   46.560845] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [   46.561988] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   46.563452] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   46.565495] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-On livepatch disable, all currently loaded klp_objects' (vmlinux and
-livepatch_callbacks_mod) pre/post-unpatch callbacks are executed:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   48.568885] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   48.568910] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   48.569441] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   48.570502] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   49.759091] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   49.759171] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   49.759742] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   49.760690] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   52.592283] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-
-
-Test 3
-------
-
-Test loading the livepatch after a targeted kernel module, then unload
-the kernel module before disabling the livepatch.  This tests the
-livepatch core's module_going handler:
-
-- load target module
-- load livepatch
-- unload target module
-- disable livepatch
-- unload livepatch
-
-First load a target module, then the livepatch:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   54.607948] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   56.613919] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   56.614411] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   56.614436] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   56.614818] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   56.615656] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   57.759070] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   57.759147] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   57.759621] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_LIVE] Normal state
-  [   57.760307] livepatch: 'livepatch_callbacks_demo': patching complete
-
-When a target module is unloaded, the livepatch is only reverted from
-that klp_object (livepatch_callbacks_mod).  As such, only its pre and
-post-unpatch callbacks are executed when this occurs:
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   58.623409] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [   58.623903] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-  [   58.624658] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [   58.625305] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-When the livepatch is disabled, pre and post-unpatch callbacks are run
-for the remaining klp_object, vmlinux:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   60.638420] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   60.638444] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   60.638996] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   61.727088] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   61.727165] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   61.727985] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 4
-------
-
-This test is similar to the previous test, however the livepatch is
-loaded first.  This tests the livepatch core's module_coming and
-module_going handlers:
-
-- load livepatch
-- load target module
-- unload target module
-- disable livepatch
-- unload livepatch
-
-First load the livepatch:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   64.661552] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   64.662147] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   64.662175] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   64.662850] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   65.695056] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   65.695147] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   65.695561] livepatch: 'livepatch_callbacks_demo': patching complete
-
-When a targeted kernel module is subsequently loaded, only its
-pre/post-patch callbacks are executed:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   66.669196] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [   66.669882] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   66.670744] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   66.672873] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-When the target module is unloaded, the livepatch is only reverted from
-the livepatch_callbacks_mod klp_object.  As such, only pre and
-post-unpatch callbacks are executed when this occurs:
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   68.680065] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [   68.680688] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-  [   68.681452] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [   68.682094] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   70.689225] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   70.689256] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   70.689882] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   71.711080] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   71.711481] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   71.711988] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 5
-------
-
-A simple test of loading a livepatch without one of its patch target
-klp_objects ever loaded (livepatch_callbacks_mod):
-
-- load livepatch
-- disable livepatch
-- unload livepatch
-
-Load the livepatch:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   74.711081] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   74.711595] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   74.711639] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   74.712272] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   75.743137] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   75.743219] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   75.743867] livepatch: 'livepatch_callbacks_demo': patching complete
-
-As expected, only pre/post-(un)patch handlers are executed for vmlinux:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   76.716254] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   76.716278] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   76.716666] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   77.727089] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   77.727194] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   77.727907] livepatch: 'livepatch_callbacks_demo': unpatching complete
+Other Examples
+==============
 
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 6
-------
-
-Test a scenario where a vmlinux pre-patch callback returns a non-zero
-status (ie, failure):
-
-- load target module
-- load livepatch -ENODEV
-- unload target module
-
-First load a target module:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   80.740520] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-Load the livepatch module, setting its 'pre_patch_ret' value to -19
-(-ENODEV).  When its vmlinux pre-patch callback executed, this status
-code will propagate back to the module-loading subsystem.  The result is
-that the insmod command refuses to load the livepatch module:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko pre_patch_ret=-19
-  [   82.747326] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   82.747743] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   82.747767] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   82.748237] livepatch: pre-patch callback failed for object 'vmlinux'
-  [   82.748637] livepatch: failed to enable patch 'livepatch_callbacks_demo'
-  [   82.749059] livepatch: 'livepatch_callbacks_demo': canceling transition, going to unpatch
-  [   82.749060] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   82.749868] livepatch: 'livepatch_callbacks_demo': unpatching complete
-  [   82.765809] insmod: ERROR: could not insert module samples/livepatch/livepatch-callbacks-demo.ko: No such device
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   84.774238] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-
-
-Test 7
-------
-
-Similar to the previous test, setup a livepatch such that its vmlinux
-pre-patch callback returns success.  However, when a targeted kernel
-module is later loaded, have the livepatch return a failing status code:
-
-- load livepatch
-- setup -ENODEV
-- load target module
-- disable livepatch
-- unload livepatch
-
-Load the livepatch, notice vmlinux pre-patch callback succeeds:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   86.787845] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   86.788325] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   86.788427] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   86.788821] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   87.711069] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   87.711143] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   87.711886] livepatch: 'livepatch_callbacks_demo': patching complete
-
-Set a trap so subsequent pre-patch callbacks to this livepatch will
-return -ENODEV:
-
-  % echo -19 > /sys/module/livepatch_callbacks_demo/parameters/pre_patch_ret
-
-The livepatch pre-patch callback for subsequently loaded target modules
-will return failure, so the module loader refuses to load the kernel
-module.  Notice that no post-patch or pre/post-unpatch callbacks are
-executed for this klp_object:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [   90.796976] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [   90.797834] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [   90.798900] livepatch: pre-patch callback failed for object 'livepatch_callbacks_mod'
-  [   90.799652] livepatch: patch 'livepatch_callbacks_demo' failed for module 'livepatch_callbacks_mod', refusing to load module 'livepatch_callbacks_mod'
-  [   90.819737] insmod: ERROR: could not insert module samples/livepatch/livepatch-callbacks-mod.ko: No such device
-
-However, pre/post-unpatch callbacks run for the vmlinux klp_object:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [   92.823547] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [   92.823573] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [   92.824331] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [   93.727128] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [   93.727327] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [   93.727861] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-
-
-Test 8
-------
-
-Test loading multiple targeted kernel modules.  This test-case is
-mainly for comparing with the next test-case.
-
-- load busy target module (0s sleep),
-- load livepatch
-- load target module
-- unload target module
-- disable livepatch
-- unload livepatch
-- unload busy target module
-
-
-Load a target "busy" kernel module which kicks off a worker function
-that immediately exits:
-
-  % insmod samples/livepatch/livepatch-callbacks-busymod.ko sleep_secs=0
-  [   96.910107] livepatch_callbacks_busymod: livepatch_callbacks_mod_init
-  [   96.910600] livepatch_callbacks_busymod: busymod_work_func, sleeping 0 seconds ...
-  [   96.913024] livepatch_callbacks_busymod: busymod_work_func exit
-
-Proceed with loading the livepatch and another ordinary target module,
-notice that the post-patch callbacks are executed and the transition
-completes quickly:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [   98.917892] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [   98.918426] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [   98.918453] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [   98.918955] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [   98.923835] livepatch: 'livepatch_callbacks_demo': starting patching transition
-  [   99.743104] livepatch: 'livepatch_callbacks_demo': completing patching transition
-  [   99.743156] livepatch_callbacks_demo: post_patch_callback: vmlinux
-  [   99.743679] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [   99.744616] livepatch: 'livepatch_callbacks_demo': patching complete
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  100.930955] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [  100.931668] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [  100.932645] livepatch_callbacks_demo: post_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [  100.934125] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  102.942805] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [  102.943640] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-  [  102.944585] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [  102.945455] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [  104.953815] livepatch: 'livepatch_callbacks_demo': initializing unpatching transition
-  [  104.953838] livepatch_callbacks_demo: pre_unpatch_callback: vmlinux
-  [  104.954431] livepatch_callbacks_demo: pre_unpatch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  104.955426] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [  106.719073] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [  106.722633] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [  106.723282] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  106.724279] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-busymod.ko
-  [  108.975660] livepatch_callbacks_busymod: livepatch_callbacks_mod_exit
-
-
-Test 9
-------
-
-A similar test as the previous one, but force the "busy" kernel module
-to do longer work.
-
-The livepatching core will refuse to patch a task that is currently
-executing a to-be-patched function -- the consistency model stalls the
-current patch transition until this safety-check is met.  Test a
-scenario where one of a livepatch's target klp_objects sits on such a
-function for a long time.  Meanwhile, load and unload other target
-kernel modules while the livepatch transition is in progress.
-
-- load busy target module (30s sleep)
-- load livepatch
-- load target module
-- unload target module
-- disable livepatch
-- unload livepatch
-- unload busy target module
-
-
-Load the "busy" kernel module, this time make it do 30 seconds worth of
-work:
-
-  % insmod samples/livepatch/livepatch-callbacks-busymod.ko sleep_secs=30
-  [  110.993362] livepatch_callbacks_busymod: livepatch_callbacks_mod_init
-  [  110.994059] livepatch_callbacks_busymod: busymod_work_func, sleeping 30 seconds ...
-
-Meanwhile, the livepatch is loaded.  Notice that the patch transition
-does not complete as the targeted "busy" module is sitting on a
-to-be-patched function:
-
-  % insmod samples/livepatch/livepatch-callbacks-demo.ko
-  [  113.000309] livepatch: enabling patch 'livepatch_callbacks_demo'
-  [  113.000764] livepatch: 'livepatch_callbacks_demo': initializing patching transition
-  [  113.000791] livepatch_callbacks_demo: pre_patch_callback: vmlinux
-  [  113.001289] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  113.005208] livepatch: 'livepatch_callbacks_demo': starting patching transition
-
-Load a second target module (this one is an ordinary idle kernel
-module).  Note that *no* post-patch callbacks will be executed while the
-livepatch is still in transition:
-
-  % insmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  115.012740] livepatch: applying patch 'livepatch_callbacks_demo' to loading module 'livepatch_callbacks_mod'
-  [  115.013406] livepatch_callbacks_demo: pre_patch_callback: livepatch_callbacks_mod -> [MODULE_STATE_COMING] Full formed, running module_init
-  [  115.015315] livepatch_callbacks_mod: livepatch_callbacks_mod_init
-
-Request an unload of the simple kernel module.  The patch is still
-transitioning, so its pre-unpatch callbacks are skipped:
-
-  % rmmod samples/livepatch/livepatch-callbacks-mod.ko
-  [  117.022626] livepatch_callbacks_mod: livepatch_callbacks_mod_exit
-  [  117.023376] livepatch: reverting patch 'livepatch_callbacks_demo' on unloading module 'livepatch_callbacks_mod'
-  [  117.024533] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_mod -> [MODULE_STATE_GOING] Going away
-
-Finally the livepatch is disabled.  Since none of the patch's
-klp_object's post-patch callbacks executed, the remaining klp_object's
-pre-unpatch callbacks are skipped:
-
-  % echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
-  [  119.035408] livepatch: 'livepatch_callbacks_demo': reversing transition from patching to unpatching
-  [  119.035485] livepatch: 'livepatch_callbacks_demo': starting unpatching transition
-  [  119.711166] livepatch: 'livepatch_callbacks_demo': completing unpatching transition
-  [  119.714179] livepatch_callbacks_demo: post_unpatch_callback: vmlinux
-  [  119.714653] livepatch_callbacks_demo: post_unpatch_callback: livepatch_callbacks_busymod -> [MODULE_STATE_LIVE] Normal state
-  [  119.715437] livepatch: 'livepatch_callbacks_demo': unpatching complete
-
-  % rmmod samples/livepatch/livepatch-callbacks-demo.ko
-  % rmmod samples/livepatch/livepatch-callbacks-busymod.ko
-  [  141.279111] livepatch_callbacks_busymod: busymod_work_func exit
-  [  141.279760] livepatch_callbacks_busymod: livepatch_callbacks_mod_exit
+Sample livepatch modules demonstrating the callback API can be found in
+the samples/livepatch/ directory.  These samples were modified for use in
+kselftests and can be found in the lib/livepatch directory.
diff --git a/MAINTAINERS b/MAINTAINERS
index 99113b9fcdd2..9faeabf5f3a7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8805,6 +8805,7 @@ F:	arch/x86/kernel/livepatch.c
 F:	Documentation/livepatch/
 F:	Documentation/ABI/testing/sysfs-kernel-livepatch
 F:	samples/livepatch/
+F:	tools/testing/selftests/livepatch/
 L:	live-patching@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching.git
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index d4df5b24d75e..0298eb8eaa4e 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1991,6 +1991,27 @@ config TEST_MEMCAT_P
 
 	  If unsure, say N.
 
+config TEST_LIVEPATCH
+	tristate "Test livepatching"
+	default n
+	depends on LIVEPATCH
+	depends on m
+	help
+	  Test kernel livepatching features for correctness.  The tests will
+	  load test modules that will be livepatched in various scenarios.
+
+	  To run all the livepatching tests:
+
+	  make -C tools/testing/selftests TARGETS=livepatch run_tests
+
+	  Alternatively, individual tests may be invoked:
+
+	  tools/testing/selftests/livepatch/test-callbacks.sh
+	  tools/testing/selftests/livepatch/test-livepatch.sh
+	  tools/testing/selftests/livepatch/test-shadow-vars.sh
+
+	  If unsure, say N.
+
 config TEST_OBJAGG
 	tristate "Perform selftest on object aggreration manager"
 	default n
@@ -1999,7 +2020,6 @@ config TEST_OBJAGG
 	  Enable this option to test object aggregation manager on boot
 	  (or module load).
 
-	  If unsure, say N.
 
 endif # RUNTIME_TESTING_MENU
 
diff --git a/lib/Makefile b/lib/Makefile
index e1b59da71418..a1cfa1975ca3 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -77,6 +77,8 @@ obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o
 obj-$(CONFIG_TEST_MEMCAT_P) += test_memcat_p.o
 obj-$(CONFIG_TEST_OBJAGG) += test_objagg.o
 
+obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/
+
 ifeq ($(CONFIG_DEBUG_KOBJECT),y)
 CFLAGS_kobject.o += -DDEBUG
 CFLAGS_kobject_uevent.o += -DDEBUG
diff --git a/lib/livepatch/Makefile b/lib/livepatch/Makefile
new file mode 100644
index 000000000000..26900ddaef82
--- /dev/null
+++ b/lib/livepatch/Makefile
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for livepatch test code.
+
+obj-$(CONFIG_TEST_LIVEPATCH) += test_klp_atomic_replace.o \
+				test_klp_callbacks_demo.o \
+				test_klp_callbacks_demo2.o \
+				test_klp_callbacks_busy.o \
+				test_klp_callbacks_mod.o \
+				test_klp_livepatch.o \
+				test_klp_shadow_vars.o
+
+# Target modules to be livepatched require CC_FLAGS_FTRACE
+CFLAGS_test_klp_callbacks_busy.o	+= $(CC_FLAGS_FTRACE)
+CFLAGS_test_klp_callbacks_mod.o		+= $(CC_FLAGS_FTRACE)
diff --git a/lib/livepatch/test_klp_atomic_replace.c b/lib/livepatch/test_klp_atomic_replace.c
new file mode 100644
index 000000000000..5af7093ca00c
--- /dev/null
+++ b/lib/livepatch/test_klp_atomic_replace.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+static int replace;
+module_param(replace, int, 0644);
+MODULE_PARM_DESC(replace, "replace (default=0)");
+
+#include <linux/seq_file.h>
+static int livepatch_meminfo_proc_show(struct seq_file *m, void *v)
+{
+	seq_printf(m, "%s: %s\n", THIS_MODULE->name,
+		   "this has been live patched");
+	return 0;
+}
+
+static struct klp_func funcs[] = {
+	{
+		.old_name = "meminfo_proc_show",
+		.new_func = livepatch_meminfo_proc_show,
+	}, {}
+};
+
+static struct klp_object objs[] = {
+	{
+		/* name being NULL means vmlinux */
+		.funcs = funcs,
+	}, {}
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+	/* set .replace in the init function below for demo purposes */
+};
+
+static int test_klp_atomic_replace_init(void)
+{
+	patch.replace = replace;
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_atomic_replace_exit(void)
+{
+}
+
+module_init(test_klp_atomic_replace_init);
+module_exit(test_klp_atomic_replace_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: atomic replace");
diff --git a/lib/livepatch/test_klp_callbacks_busy.c b/lib/livepatch/test_klp_callbacks_busy.c
new file mode 100644
index 000000000000..40beddf8a0e2
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_busy.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+
+static int sleep_secs;
+module_param(sleep_secs, int, 0644);
+MODULE_PARM_DESC(sleep_secs, "sleep_secs (default=0)");
+
+static void busymod_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(work, busymod_work_func);
+
+static void busymod_work_func(struct work_struct *work)
+{
+	pr_info("%s, sleeping %d seconds ...\n", __func__, sleep_secs);
+	msleep(sleep_secs * 1000);
+	pr_info("%s exit\n", __func__);
+}
+
+static int test_klp_callbacks_busy_init(void)
+{
+	pr_info("%s\n", __func__);
+	schedule_delayed_work(&work,
+		msecs_to_jiffies(1000 * 0));
+	return 0;
+}
+
+static void test_klp_callbacks_busy_exit(void)
+{
+	cancel_delayed_work_sync(&work);
+	pr_info("%s\n", __func__);
+}
+
+module_init(test_klp_callbacks_busy_init);
+module_exit(test_klp_callbacks_busy_exit);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: busy target module");
diff --git a/lib/livepatch/test_klp_callbacks_demo.c b/lib/livepatch/test_klp_callbacks_demo.c
new file mode 100644
index 000000000000..3fd8fe1cd1cc
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_demo.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+static int pre_patch_ret;
+module_param(pre_patch_ret, int, 0644);
+MODULE_PARM_DESC(pre_patch_ret, "pre_patch_ret (default=0)");
+
+static const char *const module_state[] = {
+	[MODULE_STATE_LIVE]	= "[MODULE_STATE_LIVE] Normal state",
+	[MODULE_STATE_COMING]	= "[MODULE_STATE_COMING] Full formed, running module_init",
+	[MODULE_STATE_GOING]	= "[MODULE_STATE_GOING] Going away",
+	[MODULE_STATE_UNFORMED]	= "[MODULE_STATE_UNFORMED] Still setting it up",
+};
+
+static void callback_info(const char *callback, struct klp_object *obj)
+{
+	if (obj->mod)
+		pr_info("%s: %s -> %s\n", callback, obj->mod->name,
+			module_state[obj->mod->state]);
+	else
+		pr_info("%s: vmlinux\n", callback);
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static int pre_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+	return pre_patch_ret;
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static void post_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void pre_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void post_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+static void patched_work_func(struct work_struct *work)
+{
+	pr_info("%s\n", __func__);
+}
+
+static struct klp_func no_funcs[] = {
+	{}
+};
+
+static struct klp_func busymod_funcs[] = {
+	{
+		.old_name = "busymod_work_func",
+		.new_func = patched_work_func,
+	}, {}
+};
+
+static struct klp_object objs[] = {
+	{
+		.name = NULL,	/* vmlinux */
+		.funcs = no_funcs,
+		.callbacks = {
+			.pre_patch = pre_patch_callback,
+			.post_patch = post_patch_callback,
+			.pre_unpatch = pre_unpatch_callback,
+			.post_unpatch = post_unpatch_callback,
+		},
+	},	{
+		.name = "test_klp_callbacks_mod",
+		.funcs = no_funcs,
+		.callbacks = {
+			.pre_patch = pre_patch_callback,
+			.post_patch = post_patch_callback,
+			.pre_unpatch = pre_unpatch_callback,
+			.post_unpatch = post_unpatch_callback,
+		},
+	},	{
+		.name = "test_klp_callbacks_busy",
+		.funcs = busymod_funcs,
+		.callbacks = {
+			.pre_patch = pre_patch_callback,
+			.post_patch = post_patch_callback,
+			.pre_unpatch = pre_unpatch_callback,
+			.post_unpatch = post_unpatch_callback,
+		},
+	}, { }
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+};
+
+static int test_klp_callbacks_demo_init(void)
+{
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_callbacks_demo_exit(void)
+{
+}
+
+module_init(test_klp_callbacks_demo_init);
+module_exit(test_klp_callbacks_demo_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: livepatch demo");
diff --git a/lib/livepatch/test_klp_callbacks_demo2.c b/lib/livepatch/test_klp_callbacks_demo2.c
new file mode 100644
index 000000000000..5417573e80af
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_demo2.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+static int replace;
+module_param(replace, int, 0644);
+MODULE_PARM_DESC(replace, "replace (default=0)");
+
+static const char *const module_state[] = {
+	[MODULE_STATE_LIVE]	= "[MODULE_STATE_LIVE] Normal state",
+	[MODULE_STATE_COMING]	= "[MODULE_STATE_COMING] Full formed, running module_init",
+	[MODULE_STATE_GOING]	= "[MODULE_STATE_GOING] Going away",
+	[MODULE_STATE_UNFORMED]	= "[MODULE_STATE_UNFORMED] Still setting it up",
+};
+
+static void callback_info(const char *callback, struct klp_object *obj)
+{
+	if (obj->mod)
+		pr_info("%s: %s -> %s\n", callback, obj->mod->name,
+			module_state[obj->mod->state]);
+	else
+		pr_info("%s: vmlinux\n", callback);
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static int pre_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+	return 0;
+}
+
+/* Executed on object patching (ie, patch enablement) */
+static void post_patch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void pre_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+/* Executed on object unpatching (ie, patch disablement) */
+static void post_unpatch_callback(struct klp_object *obj)
+{
+	callback_info(__func__, obj);
+}
+
+static struct klp_func no_funcs[] = {
+	{ }
+};
+
+static struct klp_object objs[] = {
+	{
+		.name = NULL,	/* vmlinux */
+		.funcs = no_funcs,
+		.callbacks = {
+			.pre_patch = pre_patch_callback,
+			.post_patch = post_patch_callback,
+			.pre_unpatch = pre_unpatch_callback,
+			.post_unpatch = post_unpatch_callback,
+		},
+	}, { }
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+	/* set .replace in the init function below for demo purposes */
+};
+
+static int test_klp_callbacks_demo2_init(void)
+{
+	patch.replace = replace;
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_callbacks_demo2_exit(void)
+{
+}
+
+module_init(test_klp_callbacks_demo2_init);
+module_exit(test_klp_callbacks_demo2_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: livepatch demo2");
diff --git a/lib/livepatch/test_klp_callbacks_mod.c b/lib/livepatch/test_klp_callbacks_mod.c
new file mode 100644
index 000000000000..8fbe645b1c2c
--- /dev/null
+++ b/lib/livepatch/test_klp_callbacks_mod.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+
+static int test_klp_callbacks_mod_init(void)
+{
+	pr_info("%s\n", __func__);
+	return 0;
+}
+
+static void test_klp_callbacks_mod_exit(void)
+{
+	pr_info("%s\n", __func__);
+}
+
+module_init(test_klp_callbacks_mod_init);
+module_exit(test_klp_callbacks_mod_exit);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: target module");
diff --git a/lib/livepatch/test_klp_livepatch.c b/lib/livepatch/test_klp_livepatch.c
new file mode 100644
index 000000000000..aff08199de71
--- /dev/null
+++ b/lib/livepatch/test_klp_livepatch.c
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2014 Seth Jennings <sjenning@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/livepatch.h>
+
+#include <linux/seq_file.h>
+static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
+{
+	seq_printf(m, "%s: %s\n", THIS_MODULE->name,
+		   "this has been live patched");
+	return 0;
+}
+
+static struct klp_func funcs[] = {
+	{
+		.old_name = "cmdline_proc_show",
+		.new_func = livepatch_cmdline_proc_show,
+	}, { }
+};
+
+static struct klp_object objs[] = {
+	{
+		/* name being NULL means vmlinux */
+		.funcs = funcs,
+	}, { }
+};
+
+static struct klp_patch patch = {
+	.mod = THIS_MODULE,
+	.objs = objs,
+};
+
+static int test_klp_livepatch_init(void)
+{
+	return klp_enable_patch(&patch);
+}
+
+static void test_klp_livepatch_exit(void)
+{
+}
+
+module_init(test_klp_livepatch_init);
+module_exit(test_klp_livepatch_exit);
+MODULE_LICENSE("GPL");
+MODULE_INFO(livepatch, "Y");
+MODULE_AUTHOR("Seth Jennings <sjenning@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: livepatch module");
diff --git a/lib/livepatch/test_klp_shadow_vars.c b/lib/livepatch/test_klp_shadow_vars.c
new file mode 100644
index 000000000000..02f892f941dc
--- /dev/null
+++ b/lib/livepatch/test_klp_shadow_vars.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/livepatch.h>
+#include <linux/slab.h>
+
+/*
+ * Keep a small list of pointers so that we can print address-agnostic
+ * pointer values.  Use a rolling integer count to differentiate the values.
+ * Ironically we could have used the shadow variable API to do this, but
+ * let's not lean too heavily on the very code we're testing.
+ */
+static LIST_HEAD(ptr_list);
+struct shadow_ptr {
+	void *ptr;
+	int id;
+	struct list_head list;
+};
+
+static void free_ptr_list(void)
+{
+	struct shadow_ptr *sp, *tmp_sp;
+
+	list_for_each_entry_safe(sp, tmp_sp, &ptr_list, list) {
+		list_del(&sp->list);
+		kfree(sp);
+	}
+}
+
+static int ptr_id(void *ptr)
+{
+	struct shadow_ptr *sp;
+	static int count;
+
+	list_for_each_entry(sp, &ptr_list, list) {
+		if (sp->ptr == ptr)
+			return sp->id;
+	}
+
+	sp = kmalloc(sizeof(*sp), GFP_ATOMIC);
+	if (!sp)
+		return -1;
+	sp->ptr = ptr;
+	sp->id = count++;
+
+	list_add(&sp->list, &ptr_list);
+
+	return sp->id;
+}
+
+/*
+ * Shadow variable wrapper functions that echo the function and arguments
+ * to the kernel log for testing verification.  Don't display raw pointers,
+ * but use the ptr_id() value instead.
+ */
+static void *shadow_get(void *obj, unsigned long id)
+{
+	void *ret = klp_shadow_get(obj, id);
+
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx) = PTR%d\n",
+		__func__, ptr_id(obj), id, ptr_id(ret));
+
+	return ret;
+}
+
+static void *shadow_alloc(void *obj, unsigned long id, size_t size,
+			  gfp_t gfp_flags, klp_shadow_ctor_t ctor,
+			  void *ctor_data)
+{
+	void *ret = klp_shadow_alloc(obj, id, size, gfp_flags, ctor,
+				     ctor_data);
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx, size=%zx, gfp_flags=%pGg), ctor=PTR%d, ctor_data=PTR%d = PTR%d\n",
+		__func__, ptr_id(obj), id, size, &gfp_flags, ptr_id(ctor),
+		ptr_id(ctor_data), ptr_id(ret));
+	return ret;
+}
+
+static void *shadow_get_or_alloc(void *obj, unsigned long id, size_t size,
+				 gfp_t gfp_flags, klp_shadow_ctor_t ctor,
+				 void *ctor_data)
+{
+	void *ret = klp_shadow_get_or_alloc(obj, id, size, gfp_flags, ctor,
+					    ctor_data);
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx, size=%zx, gfp_flags=%pGg), ctor=PTR%d, ctor_data=PTR%d = PTR%d\n",
+		__func__, ptr_id(obj), id, size, &gfp_flags, ptr_id(ctor),
+		ptr_id(ctor_data), ptr_id(ret));
+	return ret;
+}
+
+static void shadow_free(void *obj, unsigned long id, klp_shadow_dtor_t dtor)
+{
+	klp_shadow_free(obj, id, dtor);
+	pr_info("klp_%s(obj=PTR%d, id=0x%lx, dtor=PTR%d)\n",
+		__func__, ptr_id(obj), id, ptr_id(dtor));
+}
+
+static void shadow_free_all(unsigned long id, klp_shadow_dtor_t dtor)
+{
+	klp_shadow_free_all(id, dtor);
+	pr_info("klp_%s(id=0x%lx, dtor=PTR%d)\n",
+		__func__, id, ptr_id(dtor));
+}
+
+
+/* Shadow variable constructor - remember simple pointer data */
+static int shadow_ctor(void *obj, void *shadow_data, void *ctor_data)
+{
+	int **shadow_int = shadow_data;
+	*shadow_int = ctor_data;
+	pr_info("%s: PTR%d -> PTR%d\n",
+		__func__, ptr_id(shadow_int), ptr_id(ctor_data));
+
+	return 0;
+}
+
+static void shadow_dtor(void *obj, void *shadow_data)
+{
+	pr_info("%s(obj=PTR%d, shadow_data=PTR%d)\n",
+		__func__, ptr_id(obj), ptr_id(shadow_data));
+}
+
+static int test_klp_shadow_vars_init(void)
+{
+	void *obj			= THIS_MODULE;
+	int id			= 0x1234;
+	size_t size		= sizeof(int *);
+	gfp_t gfp_flags		= GFP_KERNEL;
+
+	int var1, var2, var3, var4;
+	int **sv1, **sv2, **sv3, **sv4;
+
+	void *ret;
+
+	ptr_id(NULL);
+	ptr_id(&var1);
+	ptr_id(&var2);
+	ptr_id(&var3);
+	ptr_id(&var4);
+
+	/*
+	 * With an empty shadow variable hash table, expect not to find
+	 * any matches.
+	 */
+	ret = shadow_get(obj, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	/*
+	 * Allocate a few shadow variables with different <obj> and <id>.
+	 */
+	sv1 = shadow_alloc(obj, id, size, gfp_flags, shadow_ctor, &var1);
+	sv2 = shadow_alloc(obj + 1, id, size, gfp_flags, shadow_ctor, &var2);
+	sv3 = shadow_alloc(obj, id + 1, size, gfp_flags, shadow_ctor, &var3);
+
+	/*
+	 * Verify we can find our new shadow variables and that they point
+	 * to expected data.
+	 */
+	ret = shadow_get(obj, id);
+	if (ret == sv1 && *sv1 == &var1)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv1), ptr_id(*sv1));
+	ret = shadow_get(obj + 1, id);
+	if (ret == sv2 && *sv2 == &var2)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv2), ptr_id(*sv2));
+	ret = shadow_get(obj, id + 1);
+	if (ret == sv3 && *sv3 == &var3)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv3), ptr_id(*sv3));
+
+	/*
+	 * Allocate or get a few more, this time with the same <obj>, <id>.
+	 * The second invocation should return the same shadow var.
+	 */
+	sv4 = shadow_get_or_alloc(obj + 2, id, size, gfp_flags, shadow_ctor, &var4);
+	ret = shadow_get_or_alloc(obj + 2, id, size, gfp_flags, shadow_ctor, &var4);
+	if (ret == sv4 && *sv4 == &var4)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv4), ptr_id(*sv4));
+
+	/*
+	 * Free the <obj=*, id> shadow variables and check that we can no
+	 * longer find them.
+	 */
+	shadow_free(obj, id, shadow_dtor);			/* sv1 */
+	ret = shadow_get(obj, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	shadow_free(obj + 1, id, shadow_dtor);			/* sv2 */
+	ret = shadow_get(obj + 1, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	shadow_free(obj + 2, id, shadow_dtor);			/* sv4 */
+	ret = shadow_get(obj + 2, id);
+	if (!ret)
+		pr_info("  got expected NULL result\n");
+
+	/*
+	 * We should still find an <id+1> variable.
+	 */
+	ret = shadow_get(obj, id + 1);
+	if (ret == sv3 && *sv3 == &var3)
+		pr_info("  got expected PTR%d -> PTR%d result\n",
+			ptr_id(sv3), ptr_id(*sv3));
+
+	/*
+	 * Free all the <id+1> variables, too.
+	 */
+	shadow_free_all(id + 1, shadow_dtor);			/* sv3 */
+	ret = shadow_get(obj, id);
+	if (!ret)
+		pr_info("  shadow_get() got expected NULL result\n");
+
+
+	free_ptr_list();
+
+	return 0;
+}
+
+static void test_klp_shadow_vars_exit(void)
+{
+}
+
+module_init(test_klp_shadow_vars_init);
+module_exit(test_klp_shadow_vars_exit);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
+MODULE_DESCRIPTION("Livepatch test: shadow variables");
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index 1a2bd15c5b6e..a8ba662da827 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -21,6 +21,7 @@ TARGETS += ir
 TARGETS += kcmp
 TARGETS += kvm
 TARGETS += lib
+TARGETS += livepatch
 TARGETS += membarrier
 TARGETS += memfd
 TARGETS += memory-hotplug
diff --git a/tools/testing/selftests/livepatch/Makefile b/tools/testing/selftests/livepatch/Makefile
new file mode 100644
index 000000000000..af4aee79bebb
--- /dev/null
+++ b/tools/testing/selftests/livepatch/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+
+TEST_GEN_PROGS := \
+	test-livepatch.sh \
+	test-callbacks.sh \
+	test-shadow-vars.sh
+
+include ../lib.mk
diff --git a/tools/testing/selftests/livepatch/README b/tools/testing/selftests/livepatch/README
new file mode 100644
index 000000000000..b73cd0e2dd51
--- /dev/null
+++ b/tools/testing/selftests/livepatch/README
@@ -0,0 +1,43 @@
+====================
+Livepatch Self Tests
+====================
+
+This is a small set of sanity tests for kernel livepatching.
+
+The test suite loads and unloads several test kernel modules to verify
+livepatch behavior.  Debug information is logged to the kernel's message
+buffer and parsed for expected messages.  (Note: the tests will clear
+the message buffer between individual tests.)
+
+
+Config
+------
+
+Set these config options and their prerequisites:
+
+CONFIG_LIVEPATCH=y
+CONFIG_TEST_LIVEPATCH=m
+
+
+Running the tests
+-----------------
+
+Test kernel modules are built as part of lib/ (make modules) and need to
+be installed (make modules_install) as the test scripts will modprobe
+them.
+
+To run the livepatch selftests, from the top of the kernel source tree:
+
+  % make -C tools/testing/selftests TARGETS=livepatch run_tests
+
+
+Adding tests
+------------
+
+See the common functions.sh file for the existing collection of utility
+functions, most importantly set_dynamic_debug() and check_result().  The
+latter function greps the kernel's ring buffer for "livepatch:" and
+"test_klp" strings, so tests should be sure to include one of those
+strings for result comparison.  Other utility functions include general
+module loading and livepatch loading helpers (waiting for patch
+transitions, sysfs entries, etc.).
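+
+For illustration, a minimal (hypothetical) new test script built on
+these helpers might look like the sketch below.  "test_klp_example" is
+a placeholder module name (assumed to log nothing of its own), and any
+new script also needs to be listed in this directory's Makefile:
+
+  #!/bin/bash
+  # SPDX-License-Identifier: GPL-2.0
+  . $(dirname $0)/functions.sh
+
+  set_dynamic_debug
+
+  echo -n "TEST: example ... "
+  dmesg -C
+
+  load_mod test_klp_example
+  unload_mod test_klp_example
+
+  # the expected string must exactly match the filtered dmesg lines
+  check_result "% modprobe test_klp_example
+% rmmod test_klp_example"
+
+  exit 0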
diff --git a/tools/testing/selftests/livepatch/config b/tools/testing/selftests/livepatch/config
new file mode 100644
index 000000000000..0dd7700464a8
--- /dev/null
+++ b/tools/testing/selftests/livepatch/config
@@ -0,0 +1 @@
+CONFIG_TEST_LIVEPATCH=m
diff --git a/tools/testing/selftests/livepatch/functions.sh b/tools/testing/selftests/livepatch/functions.sh
new file mode 100644
index 000000000000..c7b9fb45d7c9
--- /dev/null
+++ b/tools/testing/selftests/livepatch/functions.sh
@@ -0,0 +1,203 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+# Shell functions for the rest of the scripts.
+
+MAX_RETRIES=600
+RETRY_INTERVAL=".1"	# seconds
+
+# log(msg) - write message to kernel log
+#	msg - insightful words
+function log() {
+	echo "$1" > /dev/kmsg
+}
+
+# die(msg) - game over, man
+#	msg - dying words
+function die() {
+	log "ERROR: $1"
+	echo "ERROR: $1" >&2
+	exit 1
+}
+
+# set_dynamic_debug() - setup kernel dynamic debug
+#	TODO - push and pop this config?
+function set_dynamic_debug() {
+	cat << EOF > /sys/kernel/debug/dynamic_debug/control
+file kernel/livepatch/* +p
+func klp_try_switch_task -p
+EOF
+}
+
+# loop_until(cmd) - run a command in a loop until it succeeds or $MAX_RETRIES
+#		    is reached, sleeping $RETRY_INTERVAL between attempts
+#	cmd - command and its arguments to run
+function loop_until() {
+	local cmd="$*"
+	local i=0
+	while true; do
+		eval "$cmd" && return 0
+		[[ $((i++)) -eq $MAX_RETRIES ]] && return 1
+		sleep $RETRY_INTERVAL
+	done
+}
+
+function is_livepatch_mod() {
+	local mod="$1"
+
+	if [[ $(modinfo "$mod" | awk '/^livepatch:/{print $NF}') == "Y" ]]; then
+		return 0
+	fi
+
+	return 1
+}
+
+function __load_mod() {
+	local mod="$1"; shift
+	local args="$*"
+
+	local msg="% modprobe $mod $args"
+	log "${msg%% }"
+	ret=$(modprobe "$mod" "$args" 2>&1)
+	if [[ "$ret" != "" ]]; then
+		die "$ret"
+	fi
+
+	# Wait for module in sysfs ...
+	loop_until '[[ -e "/sys/module/$mod" ]]' ||
+		die "failed to load module $mod"
+}
+
+
+# load_mod(modname, params) - load a kernel module
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_mod() {
+	local mod="$1"; shift
+	local args="$*"
+
+	is_livepatch_mod "$mod" &&
+		die "use load_lp() to load the livepatch module $mod"
+
+	__load_mod "$mod" "$args"
+}
+
+# load_lp_nowait(modname, params) - load a kernel module with a livepatch
+#			but do not wait until the transition finishes
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_lp_nowait() {
+	local mod="$1"; shift
+	local args="$*"
+
+	is_livepatch_mod "$mod" ||
+		die "module $mod is not a livepatch"
+
+	__load_mod "$mod" "$args"
+
+	# Wait for livepatch in sysfs ...
+	loop_until '[[ -e "/sys/kernel/livepatch/$mod" ]]' ||
+		die "failed to load module $mod (sysfs)"
+}
+
+# load_lp(modname, params) - load a kernel module with a livepatch
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_lp() {
+	local mod="$1"; shift
+	local args="$*"
+
+	load_lp_nowait "$mod" "$args"
+
+	# Wait until the transition finishes ...
+	loop_until 'grep -q '^0$' /sys/kernel/livepatch/$mod/transition' ||
+		die "failed to complete transition"
+}
+
+# load_failing_mod(modname, params) - load a kernel module, expect to fail
+#	modname - module name to load
+#	params  - module parameters to pass to modprobe
+function load_failing_mod() {
+	local mod="$1"; shift
+	local args="$*"
+
+	local msg="% modprobe $mod $args"
+	log "${msg%% }"
+	ret=$(modprobe "$mod" "$args" 2>&1)
+	if [[ "$ret" == "" ]]; then
+		die "$mod unexpectedly loaded"
+	fi
+	log "$ret"
+}
+
+# unload_mod(modname) - unload a kernel module
+#	modname - module name to unload
+function unload_mod() {
+	local mod="$1"
+
+	# Wait for module reference count to clear ...
+	loop_until '[[ $(cat "/sys/module/$mod/refcnt") == "0" ]]' ||
+		die "failed to unload module $mod (refcnt)"
+
+	log "% rmmod $mod"
+	ret=$(rmmod "$mod" 2>&1)
+	if [[ "$ret" != "" ]]; then
+		die "$ret"
+	fi
+
+	# Wait for module removal from sysfs ...
+	loop_until '[[ ! -e "/sys/module/$mod" ]]' ||
+		die "failed to unload module $mod (/sys/module)"
+}
+
+# unload_lp(modname) - unload a kernel module with a livepatch
+#	modname - module name to unload
+function unload_lp() {
+	unload_mod "$1"
+}
+
+# disable_lp(modname) - disable a livepatch
+#	modname - module name to disable
+function disable_lp() {
+	local mod="$1"
+
+	log "% echo 0 > /sys/kernel/livepatch/$mod/enabled"
+	echo 0 > /sys/kernel/livepatch/"$mod"/enabled
+
+	# Wait until the transition finishes and the livepatch gets
+	# removed from sysfs...
+	loop_until '[[ ! -e "/sys/kernel/livepatch/$mod" ]]' ||
+		die "failed to disable livepatch $mod"
+}
+
+# set_pre_patch_ret(modname, pre_patch_ret)
+#	modname - module name to set
+#	pre_patch_ret - new pre_patch_ret value
+function set_pre_patch_ret {
+	local mod="$1"; shift
+	local ret="$1"
+
+	log "% echo $ret > /sys/module/$mod/parameters/pre_patch_ret"
+	echo "$ret" > /sys/module/"$mod"/parameters/pre_patch_ret
+
+	# Wait for the sysfs value to take effect ...
+	loop_until '[[ $(cat "/sys/module/$mod/parameters/pre_patch_ret") == "$ret" ]]' ||
+		die "failed to set pre_patch_ret parameter for $mod module"
+}
+
+# check_result() - verify dmesg output
+#	TODO - better filter, out of order msgs, etc?
+function check_result {
+	local expect="$*"
+	local result
+
+	result=$(dmesg | grep -v 'tainting' | grep -e 'livepatch:' -e 'test_klp' | sed 's/^\[[ 0-9.]*\] //')
+
+	if [[ "$expect" == "$result" ]] ; then
+		echo "ok"
+	else
+		echo -e "not ok\n\n$(diff -upr --label expected --label result <(echo "$expect") <(echo "$result"))\n"
+		die "livepatch kselftest(s) failed"
+	fi
+}
diff --git a/tools/testing/selftests/livepatch/test-callbacks.sh b/tools/testing/selftests/livepatch/test-callbacks.sh
new file mode 100755
index 000000000000..e97a9dcb73c7
--- /dev/null
+++ b/tools/testing/selftests/livepatch/test-callbacks.sh
@@ -0,0 +1,587 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+. $(dirname $0)/functions.sh
+
+MOD_LIVEPATCH=test_klp_callbacks_demo
+MOD_LIVEPATCH2=test_klp_callbacks_demo2
+MOD_TARGET=test_klp_callbacks_mod
+MOD_TARGET_BUSY=test_klp_callbacks_busy
+
+set_dynamic_debug
+
+
+# TEST: target module before livepatch
+#
+# Test a combination of loading a kernel module and a livepatch that
+# patches a function in the first module.  Load the target module
+# before the livepatch module.  Unload them in the same order.
+#
+# - On livepatch enable, before the livepatch transition starts,
+#   pre-patch callbacks are executed for vmlinux and $MOD_TARGET (those
+#   klp_objects currently loaded).  After klp_objects are patched
+#   according to the klp_patch, their post-patch callbacks run and the
+#   transition completes.
+#
+# - Similarly, on livepatch disable, pre-patch callbacks run before the
+#   unpatching transition starts.  klp_objects are reverted, post-patch
+#   callbacks execute and the transition completes.
+
+echo -n "TEST: target module before livepatch ... "
+dmesg -C
+
+load_mod $MOD_TARGET
+load_lp $MOD_LIVEPATCH
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET
+
+check_result "% modprobe $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_init
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit"
+
+
+# TEST: module_coming notifier
+#
+# This test is similar to the previous test, but (un)load the livepatch
+# module before the target kernel module.  This tests the livepatch
+# core's module_coming handler.
+#
+# - On livepatch enable, only pre/post-patch callbacks are executed for
+#   currently loaded klp_objects, in this case, vmlinux.
+#
+# - When a targeted module is subsequently loaded, only its
+#   pre/post-patch callbacks are executed.
+#
+# - On livepatch disable, all currently loaded klp_objects' (vmlinux and
+#   $MOD_TARGET) pre/post-unpatch callbacks are executed.
+
+echo -n "TEST: module_coming notifier ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit"
+
+
+# TEST: module_going notifier
+#
+# Test loading the livepatch after a targeted kernel module, then unload
+# the kernel module before disabling the livepatch.  This tests the
+# livepatch core's module_going handler.
+#
+# - First load a target module, then the livepatch.
+#
+# - When a target module is unloaded, the livepatch is only reverted
+#   from that klp_object ($MOD_TARGET).  As such, only its pre and
+#   post-unpatch callbacks are executed when this occurs.
+#
+# - When the livepatch is disabled, pre and post-unpatch callbacks are
+#   run for the remaining klp_object, vmlinux.
+
+echo -n "TEST: module_going notifier ... "
+dmesg -C
+
+load_mod $MOD_TARGET
+load_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_init
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': patching complete
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: module_coming and module_going notifiers
+#
+# This test is similar to the previous test, however the livepatch is
+# loaded first.  This tests the livepatch core's module_coming and
+# module_going handlers.
+#
+# - First load the livepatch.
+#
+# - When a targeted kernel module is subsequently loaded, only its
+#   pre/post-patch callbacks are executed.
+#
+# - When the target module is unloaded, the livepatch is only reverted
+#   from the $MOD_TARGET klp_object.  As such, only pre and
+#   post-unpatch callbacks are executed when this occurs.
+
+echo -n "TEST: module_coming and module_going notifiers ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_mod $MOD_TARGET
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: target module not present
+#
+# A simple test of loading a livepatch without one of its patch-target
+# klp_objects ($MOD_TARGET) ever having been loaded.
+#
+# - Load the livepatch.
+#
+# - As expected, only pre/post-(un)patch handlers are executed for
+#   vmlinux.
+
+echo -n "TEST: target module not present ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: pre-patch callback -ENODEV
+#
+# Test a scenario where a vmlinux pre-patch callback returns a non-zero
+# status (i.e., failure).
+#
+# - First load a target module.
+#
+# - Load the livepatch module, setting its 'pre_patch_ret' value to -19
+#   (-ENODEV).  When its vmlinux pre-patch callback executes, this
+#   status code will propagate back to the module-loading subsystem.
+#   The result is that the modprobe command refuses to load the livepatch
+#   module.
+
+echo -n "TEST: pre-patch callback -ENODEV ... "
+dmesg -C
+
+load_mod $MOD_TARGET
+load_failing_mod $MOD_LIVEPATCH pre_patch_ret=-19
+unload_mod $MOD_TARGET
+
+check_result "% modprobe $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_init
+% modprobe $MOD_LIVEPATCH pre_patch_ret=-19
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+test_klp_callbacks_demo: pre_patch_callback: vmlinux
+livepatch: pre-patch callback failed for object 'vmlinux'
+livepatch: failed to enable patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': canceling patching transition, going to unpatch
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+modprobe: ERROR: could not insert '$MOD_LIVEPATCH': No such device
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit"
+
+
+# TEST: module_coming + pre-patch callback -ENODEV
+#
+# Similar to the previous test, set up a livepatch such that its vmlinux
+# pre-patch callback returns success.  However, when a targeted kernel
+# module is later loaded, have the livepatch return a failing status
+# code.
+#
+# - Load the livepatch, vmlinux pre-patch callback succeeds.
+#
+# - Set a trap so subsequent pre-patch callbacks to this livepatch will
+#   return -ENODEV.
+#
+# - The livepatch pre-patch callback for subsequently loaded target
+#   modules will return failure, so the module loader refuses to load
+#   the kernel module.  No post-patch or pre/post-unpatch callbacks are
+#   executed for this klp_object.
+#
+# - Pre/post-unpatch callbacks are run for the vmlinux klp_object.
+
+echo -n "TEST: module_coming + pre-patch callback -ENODEV ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+set_pre_patch_ret $MOD_LIVEPATCH -19
+load_failing_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo -19 > /sys/module/$MOD_LIVEPATCH/parameters/pre_patch_ret
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+livepatch: pre-patch callback failed for object '$MOD_TARGET'
+livepatch: patch '$MOD_LIVEPATCH' failed for module '$MOD_TARGET', refusing to load module '$MOD_TARGET'
+modprobe: ERROR: could not insert '$MOD_TARGET': No such device
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: multiple target modules
+#
+# Test loading multiple targeted kernel modules.  This test-case is
+# mainly for comparing with the next test-case.
+#
+# - Load a target "busy" kernel module which kicks off a worker function
+#   that immediately exits.
+#
+# - Proceed with loading the livepatch and another ordinary target
+#   module.  Post-patch callbacks are executed and the transition
+#   completes quickly.
+
+echo -n "TEST: multiple target modules ... "
+dmesg -C
+
+load_mod $MOD_TARGET_BUSY sleep_secs=0
+# give $MOD_TARGET_BUSY::busymod_work_func() a chance to run
+sleep 5
+load_lp $MOD_LIVEPATCH
+load_mod $MOD_TARGET
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET_BUSY
+
+check_result "% modprobe $MOD_TARGET_BUSY sleep_secs=0
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_init
+$MOD_TARGET_BUSY: busymod_work_func, sleeping 0 seconds ...
+$MOD_TARGET_BUSY: busymod_work_func exit
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_LIVEPATCH: post_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+$MOD_LIVEPATCH: pre_unpatch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET_BUSY
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_exit"
+
+
+
+# TEST: busy target module
+#
+# A test similar to the previous one, but force the "busy" kernel module
+# to do its work for a longer time.
+#
+# The livepatching core will refuse to patch a task that is currently
+# executing a to-be-patched function -- the consistency model stalls the
+# current patch transition until this safety-check is met.  Test a
+# scenario where one of a livepatch's target klp_objects sits on such a
+# function for a long time.  Meanwhile, load and unload other target
+# kernel modules while the livepatch transition is in progress.
+#
+# - Load the "busy" kernel module, this time make it do 10 seconds worth
+#   of work.
+#
+# - Meanwhile, the livepatch is loaded.  Notice that the patch
+#   transition does not complete as the targeted "busy" module is
+#   sitting on a to-be-patched function.
+#
+# - Load a second target module (this one is an ordinary idle kernel
+#   module).  Note that *no* post-patch callbacks will be executed while
+#   the livepatch is still in transition.
+#
+# - Request an unload of the simple kernel module.  The patch is still
+#   transitioning, so its pre-unpatch callbacks are skipped.
+#
+# - Finally, the livepatch is disabled.  Since none of the patch's
+#   klp_objects' post-patch callbacks executed, the remaining
+#   klp_objects' pre-unpatch callbacks are skipped.
+
+echo -n "TEST: busy target module ... "
+dmesg -C
+
+load_mod $MOD_TARGET_BUSY sleep_secs=10
+load_lp_nowait $MOD_LIVEPATCH
+# Don't wait for transition, load $MOD_TARGET while the transition
+# is still stalled in $MOD_TARGET_BUSY::busymod_work_func()
+sleep 5
+load_mod $MOD_TARGET
+unload_mod $MOD_TARGET
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+unload_mod $MOD_TARGET_BUSY
+
+check_result "% modprobe $MOD_TARGET_BUSY sleep_secs=10
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_init
+$MOD_TARGET_BUSY: busymod_work_func, sleeping 10 seconds ...
+% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+% modprobe $MOD_TARGET
+livepatch: applying patch '$MOD_LIVEPATCH' to loading module '$MOD_TARGET'
+$MOD_LIVEPATCH: pre_patch_callback: $MOD_TARGET -> [MODULE_STATE_COMING] Full formed, running module_init
+$MOD_TARGET: ${MOD_TARGET}_init
+% rmmod $MOD_TARGET
+$MOD_TARGET: ${MOD_TARGET}_exit
+livepatch: reverting patch '$MOD_LIVEPATCH' on unloading module '$MOD_TARGET'
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET -> [MODULE_STATE_GOING] Going away
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': reversing transition from patching to unpatching
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+$MOD_LIVEPATCH: post_unpatch_callback: $MOD_TARGET_BUSY -> [MODULE_STATE_LIVE] Normal state
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH
+% rmmod $MOD_TARGET_BUSY
+$MOD_TARGET_BUSY: busymod_work_func exit
+$MOD_TARGET_BUSY: ${MOD_TARGET_BUSY}_exit"
+
+
+# TEST: multiple livepatches
+#
+# Test loading multiple livepatches.  This test-case is mainly for comparing
+# with the next test-case.
+#
+# - Load and unload two livepatches, pre and post (un)patch callbacks
+#   execute as each patch progresses through its (un)patching
+#   transition.
+
+echo -n "TEST: multiple livepatches ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_lp $MOD_LIVEPATCH2
+disable_lp $MOD_LIVEPATCH2
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH2
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_LIVEPATCH2
+livepatch: enabling patch '$MOD_LIVEPATCH2'
+livepatch: '$MOD_LIVEPATCH2': initializing patching transition
+$MOD_LIVEPATCH2: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting patching transition
+livepatch: '$MOD_LIVEPATCH2': completing patching transition
+$MOD_LIVEPATCH2: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH2/enabled
+livepatch: '$MOD_LIVEPATCH2': initializing unpatching transition
+$MOD_LIVEPATCH2: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH2': completing unpatching transition
+$MOD_LIVEPATCH2: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': unpatching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+$MOD_LIVEPATCH: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+$MOD_LIVEPATCH: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH2
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: atomic replace
+#
+# Load multiple livepatches, but the second as an 'atomic-replace'
+# patch.  When the latter loads, the original livepatch should be
+# disabled and *none* of its pre/post-unpatch callbacks executed.  On
+# the other hand, when the atomic-replace livepatch is disabled, its
+# pre/post-unpatch callbacks *should* be executed.
+#
+# - Load and unload two livepatches, the second of which has its
+#   .replace flag set true.
+#
+# - Pre and post patch callbacks are executed for both livepatches.
+#
+# - Once the atomic replace module is loaded, only its pre and post
+#   unpatch callbacks are executed.
+
+echo -n "TEST: atomic replace ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+load_lp $MOD_LIVEPATCH2 replace=1
+disable_lp $MOD_LIVEPATCH2
+unload_lp $MOD_LIVEPATCH2
+unload_lp $MOD_LIVEPATCH
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+$MOD_LIVEPATCH: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+$MOD_LIVEPATCH: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH': patching complete
+% modprobe $MOD_LIVEPATCH2 replace=1
+livepatch: enabling patch '$MOD_LIVEPATCH2'
+livepatch: '$MOD_LIVEPATCH2': initializing patching transition
+$MOD_LIVEPATCH2: pre_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting patching transition
+livepatch: '$MOD_LIVEPATCH2': completing patching transition
+$MOD_LIVEPATCH2: post_patch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH2/enabled
+livepatch: '$MOD_LIVEPATCH2': initializing unpatching transition
+$MOD_LIVEPATCH2: pre_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH2': completing unpatching transition
+$MOD_LIVEPATCH2: post_unpatch_callback: vmlinux
+livepatch: '$MOD_LIVEPATCH2': unpatching complete
+% rmmod $MOD_LIVEPATCH2
+% rmmod $MOD_LIVEPATCH"
+
+
+exit 0
diff --git a/tools/testing/selftests/livepatch/test-livepatch.sh b/tools/testing/selftests/livepatch/test-livepatch.sh
new file mode 100755
index 000000000000..f05268aea859
--- /dev/null
+++ b/tools/testing/selftests/livepatch/test-livepatch.sh
@@ -0,0 +1,168 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+. $(dirname $0)/functions.sh
+
+MOD_LIVEPATCH=test_klp_livepatch
+MOD_REPLACE=test_klp_atomic_replace
+
+set_dynamic_debug
+
+
+# TEST: basic function patching
+# - load a livepatch that modifies the output from /proc/cmdline and
+#   verify correct behavior
+# - unload the livepatch and make sure the patch was removed
+
+echo -n "TEST: basic function patching ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+
+if [[ "$(cat /proc/cmdline)" != "$MOD_LIVEPATCH: this has been live patched" ]] ; then
+	echo -e "FAIL\n\n"
+	die "livepatch kselftest(s) failed"
+fi
+
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+if [[ "$(cat /proc/cmdline)" == "$MOD_LIVEPATCH: this has been live patched" ]] ; then
+	echo -e "FAIL\n\n"
+	die "livepatch kselftest(s) failed"
+fi
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+livepatch: '$MOD_LIVEPATCH': patching complete
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: multiple livepatches
+# - load a livepatch that modifies the output from /proc/cmdline and
+#   verify correct behavior
+# - load another livepatch and verify that both livepatches are active
+# - unload the second livepatch and verify that the first is still active
+# - unload the first livepatch and verify none are active
+
+echo -n "TEST: multiple livepatches ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+load_lp $MOD_REPLACE replace=0
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+disable_lp $MOD_REPLACE
+unload_lp $MOD_REPLACE
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+disable_lp $MOD_LIVEPATCH
+unload_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+livepatch: '$MOD_LIVEPATCH': patching complete
+$MOD_LIVEPATCH: this has been live patched
+% modprobe $MOD_REPLACE replace=0
+livepatch: enabling patch '$MOD_REPLACE'
+livepatch: '$MOD_REPLACE': initializing patching transition
+livepatch: '$MOD_REPLACE': starting patching transition
+livepatch: '$MOD_REPLACE': completing patching transition
+livepatch: '$MOD_REPLACE': patching complete
+$MOD_LIVEPATCH: this has been live patched
+$MOD_REPLACE: this has been live patched
+% echo 0 > /sys/kernel/livepatch/$MOD_REPLACE/enabled
+livepatch: '$MOD_REPLACE': initializing unpatching transition
+livepatch: '$MOD_REPLACE': starting unpatching transition
+livepatch: '$MOD_REPLACE': completing unpatching transition
+livepatch: '$MOD_REPLACE': unpatching complete
+% rmmod $MOD_REPLACE
+$MOD_LIVEPATCH: this has been live patched
+% echo 0 > /sys/kernel/livepatch/$MOD_LIVEPATCH/enabled
+livepatch: '$MOD_LIVEPATCH': initializing unpatching transition
+livepatch: '$MOD_LIVEPATCH': starting unpatching transition
+livepatch: '$MOD_LIVEPATCH': completing unpatching transition
+livepatch: '$MOD_LIVEPATCH': unpatching complete
+% rmmod $MOD_LIVEPATCH"
+
+
+# TEST: atomic replace livepatch
+# - load a livepatch that modifies the output from /proc/cmdline and
+#   verify correct behavior
+# - load an atomic replace livepatch and verify that only the second is active
+# - remove the first livepatch and verify that the atomic replace livepatch
+#   is still active
+# - remove the atomic replace livepatch and verify that none are active
+
+echo -n "TEST: atomic replace livepatch ... "
+dmesg -C
+
+load_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+load_lp $MOD_REPLACE replace=1
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+unload_lp $MOD_LIVEPATCH
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+disable_lp $MOD_REPLACE
+unload_lp $MOD_REPLACE
+
+grep 'live patched' /proc/cmdline > /dev/kmsg
+grep 'live patched' /proc/meminfo > /dev/kmsg
+
+check_result "% modprobe $MOD_LIVEPATCH
+livepatch: enabling patch '$MOD_LIVEPATCH'
+livepatch: '$MOD_LIVEPATCH': initializing patching transition
+livepatch: '$MOD_LIVEPATCH': starting patching transition
+livepatch: '$MOD_LIVEPATCH': completing patching transition
+livepatch: '$MOD_LIVEPATCH': patching complete
+$MOD_LIVEPATCH: this has been live patched
+% modprobe $MOD_REPLACE replace=1
+livepatch: enabling patch '$MOD_REPLACE'
+livepatch: '$MOD_REPLACE': initializing patching transition
+livepatch: '$MOD_REPLACE': starting patching transition
+livepatch: '$MOD_REPLACE': completing patching transition
+livepatch: '$MOD_REPLACE': patching complete
+$MOD_REPLACE: this has been live patched
+% rmmod $MOD_LIVEPATCH
+$MOD_REPLACE: this has been live patched
+% echo 0 > /sys/kernel/livepatch/$MOD_REPLACE/enabled
+livepatch: '$MOD_REPLACE': initializing unpatching transition
+livepatch: '$MOD_REPLACE': starting unpatching transition
+livepatch: '$MOD_REPLACE': completing unpatching transition
+livepatch: '$MOD_REPLACE': unpatching complete
+% rmmod $MOD_REPLACE"
+
+
+exit 0
diff --git a/tools/testing/selftests/livepatch/test-shadow-vars.sh b/tools/testing/selftests/livepatch/test-shadow-vars.sh
new file mode 100755
index 000000000000..04a37831e204
--- /dev/null
+++ b/tools/testing/selftests/livepatch/test-shadow-vars.sh
@@ -0,0 +1,60 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2018 Joe Lawrence <joe.lawrence@redhat.com>
+
+. $(dirname $0)/functions.sh
+
+MOD_TEST=test_klp_shadow_vars
+
+set_dynamic_debug
+
+
+# TEST: basic shadow variable API
+# - load a module that exercises the shadow variable API
+
+echo -n "TEST: basic shadow variable API ... "
+dmesg -C
+
+load_mod $MOD_TEST
+unload_mod $MOD_TEST
+
+check_result "% modprobe $MOD_TEST
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: shadow_ctor: PTR6 -> PTR1
+$MOD_TEST: klp_shadow_alloc(obj=PTR5, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR1 = PTR6
+$MOD_TEST: shadow_ctor: PTR8 -> PTR2
+$MOD_TEST: klp_shadow_alloc(obj=PTR9, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR2 = PTR8
+$MOD_TEST: shadow_ctor: PTR10 -> PTR3
+$MOD_TEST: klp_shadow_alloc(obj=PTR5, id=0x1235, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR3 = PTR10
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR6
+$MOD_TEST:   got expected PTR6 -> PTR1 result
+$MOD_TEST: klp_shadow_get(obj=PTR9, id=0x1234) = PTR8
+$MOD_TEST:   got expected PTR8 -> PTR2 result
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1235) = PTR10
+$MOD_TEST:   got expected PTR10 -> PTR3 result
+$MOD_TEST: shadow_ctor: PTR11 -> PTR4
+$MOD_TEST: klp_shadow_get_or_alloc(obj=PTR12, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR4 = PTR11
+$MOD_TEST: klp_shadow_get_or_alloc(obj=PTR12, id=0x1234, size=8, gfp_flags=GFP_KERNEL), ctor=PTR7, ctor_data=PTR4 = PTR11
+$MOD_TEST:   got expected PTR11 -> PTR4 result
+$MOD_TEST: shadow_dtor(obj=PTR5, shadow_data=PTR6)
+$MOD_TEST: klp_shadow_free(obj=PTR5, id=0x1234, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: shadow_dtor(obj=PTR9, shadow_data=PTR8)
+$MOD_TEST: klp_shadow_free(obj=PTR9, id=0x1234, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR9, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: shadow_dtor(obj=PTR12, shadow_data=PTR11)
+$MOD_TEST: klp_shadow_free(obj=PTR12, id=0x1234, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR12, id=0x1234) = PTR0
+$MOD_TEST:   got expected NULL result
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1235) = PTR10
+$MOD_TEST:   got expected PTR10 -> PTR3 result
+$MOD_TEST: shadow_dtor(obj=PTR5, shadow_data=PTR10)
+$MOD_TEST: klp_shadow_free_all(id=0x1235, dtor=PTR13)
+$MOD_TEST: klp_shadow_get(obj=PTR5, id=0x1234) = PTR0
+$MOD_TEST:   shadow_get() got expected NULL result
+% rmmod test_klp_shadow_vars"
+
+exit 0
-- 
2.13.7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 03/11] livepatch: Consolidate klp_free functions
  2019-01-09 12:43 ` [PATCH v15 03/11] livepatch: Consolidate klp_free functions Petr Mladek
@ 2019-01-09 14:35   ` Miroslav Benes
  0 siblings, 0 replies; 20+ messages in thread
From: Miroslav Benes @ 2019-01-09 14:35 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel, Jessica Yu

On Wed, 9 Jan 2019, Petr Mladek wrote:

> The code for freeing livepatch structures is a bit scattered and tricky:
> 
>   + direct calls to klp_free_*_limited() and kobject_put() are
>     used to release partially initialized objects
> 
>   + klp_free_patch() removes the patch from the public list
>     and releases all objects except for patch->kobj
> 
>   + kobject_put(&patch->kobj) and the related wait_for_completion()
>     are called directly outside klp_mutex; this code is duplicated;
> 
> Now, we are going to remove the registration stage to simplify the API
> and the code. This would require handling more situations in
> klp_enable_patch() error paths.
> 
> More importantly, we are going to add a feature called atomic replace.
> It will need to dynamically create func and object structures. We will
> want to reuse the existing init() and free() functions. This would
> create even more error path scenarios.
> 
> This patch implements more straightforward free functions:
> 
>   + checks kobj_added flag instead of @limit[*]
> 
>   + initializes patch->list early so that the check for empty list
>     always works
> 
>   + The actions that have to be done outside klp_mutex are done
>     in a separate klp_free_patch_finish() function. It waits only
>     when patch->kobj was really released via the _start() part.
> 
> The patch does not change the existing behavior.
> 
> [*] We need our own flag to track that the kobject was successfully
>     added to the hierarchy.  Note that kobj.state_initialized only
>     indicates that the kobject has been initialized, not whether it has
>     been added (and needs to be removed on cleanup).
> 
> Signed-off-by: Petr Mladek <pmladek@suse.com>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Miroslav Benes <mbenes@suse.cz>
> Cc: Jessica Yu <jeyu@kernel.org>
> Cc: Jiri Kosina <jikos@kernel.org>
> Cc: Jason Baron <jbaron@akamai.com>

Acked-by: Miroslav Benes <mbenes@suse.cz>

Miroslav

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 04/11] livepatch: Don't block the removal of patches loaded after a forced transition
  2019-01-09 12:43 ` [PATCH v15 04/11] livepatch: Don't block the removal of patches loaded after a forced transition Petr Mladek
@ 2019-01-09 14:50   ` Miroslav Benes
  0 siblings, 0 replies; 20+ messages in thread
From: Miroslav Benes @ 2019-01-09 14:50 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel

On Wed, 9 Jan 2019, Petr Mladek wrote:

> module_put() is currently never called in klp_complete_transition() when
> klp_force is set. As a result, we might keep the reference count even when
> klp_enable_patch() fails and klp_cancel_transition() is called.
> 
> This might give the impression that a module might get blocked in some
> strange init state. Fortunately, it is not the case. The reference count
> is ignored when mod->init fails and erroneous modules are always removed.
> 
> Anyway, this might be confusing. Instead, this patch moves
> the global klp_forced flag into struct klp_patch. As a result,
> we block only modules that might still be in use after a forced
> transition. Newly loaded livepatches might be eventually completely
> removed later.
> 
> It is not a big deal. But the code is at least consistent with
> the reality.
> 
> Signed-off-by: Petr Mladek <pmladek@suse.com>
> Acked-by: Joe Lawrence <joe.lawrence@redhat.com>

Acked-by: Miroslav Benes <mbenes@suse.cz>

Miroslav

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 05/11] livepatch: Simplify API by removing registration step
  2019-01-09 12:43 ` [PATCH v15 05/11] livepatch: Simplify API by removing registration step Petr Mladek
@ 2019-01-10 12:32   ` Miroslav Benes
  0 siblings, 0 replies; 20+ messages in thread
From: Miroslav Benes @ 2019-01-10 12:32 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel

On Wed, 9 Jan 2019, Petr Mladek wrote:

> The possibility to re-enable a registered patch was useful for immediate
> patches where the livepatch module had to stay until the system reboot.
> The improved consistency model allows to achieve the same result by
> unloading and loading the livepatch module again.
> 
> Also we are going to add a feature called atomic replace. It will allow
> to create a patch that would replace all already registered patches.
> The aim is to handle dependent patches more securely. It will obsolete
> the stack of patches that helped to handle the dependencies so far.
> Then it might be unclear when a cumulative patch re-enabling is safe.
> 
> It would be complicated to support the many modes. Instead we could
> actually make the API and code easier to understand.
> 
> Therefore, remove the two step public API. All the checks and init calls
> are moved from klp_register_patch() to klp_enable_patch(). Also the patch
> is automatically freed, including the sysfs interface when the transition
> to the disabled state is completed.
> 
> As a result, there is never a disabled patch on the top of the stack.
> Therefore we do not need to check the stack in __klp_enable_patch().
> And we could simplify the check in __klp_disable_patch().
> 
> Also the API and logic is much easier. It is enough to call
> klp_enable_patch() in module_init() call. The patch can be disabled
> by writing '0' into /sys/kernel/livepatch/<patch>/enabled. Then the module
> can be removed once the transition finishes and sysfs interface is freed.
> 
> The only problem is how to free the structures and kobjects safely.
> The operation is triggered from the sysfs interface. We could not put
> the related kobject from there because it would cause lock inversion
> between klp_mutex and kernfs locks, see kn->count lockdep map.
> 
> Therefore, offload the free task to a workqueue. It is perfectly fine:
> 
>   + The patch can no longer be used in the livepatch operations.
> 
>   + The module could not be removed until the free operation finishes
>     and module_put() is called.
> 
>   + The operation is asynchronous already when the first
>     klp_try_complete_transition() fails and another call
>     is queued with a delay.
> 
> Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
> Signed-off-by: Petr Mladek <pmladek@suse.com>

Acked-by: Miroslav Benes <mbenes@suse.cz>

Miroslav

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 07/11] livepatch: Add atomic replace
  2019-01-09 12:43 ` [PATCH v15 07/11] livepatch: Add atomic replace Petr Mladek
@ 2019-01-10 13:05   ` Miroslav Benes
  0 siblings, 0 replies; 20+ messages in thread
From: Miroslav Benes @ 2019-01-10 13:05 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel, Jessica Yu

On Wed, 9 Jan 2019, Petr Mladek wrote:

> From: Jason Baron <jbaron@akamai.com>
> 
> Sometimes we would like to revert a particular fix. Currently, this
> is not easy because we want to keep all other fixes active and we
> could revert only the last applied patch.
> 
> One solution would be to apply new patch that implemented all
> the reverted functions like in the original code. It would work
> as expected but there will be unnecessary redirections. In addition,
> it would also require knowing which functions need to be reverted at
> build time.
> 
> Another problem is when there are many patches that touch the same
> functions. There might be dependencies between patches that are
> not enforced on the kernel side. Also it might be pretty hard to
> actually prepare the patch and ensure compatibility with the other
> patches.
> 
> Atomic replace && cumulative patches:
> 
> A better solution would be to create cumulative patch and say that
> it replaces all older ones.
> 
> This patch adds a new "replace" flag to struct klp_patch. When it is
> enabled, a set of 'nop' klp_func will be dynamically created for all
> functions that are already being patched but that will no longer be
> modified by the new patch. They are used as a new target during
> the patch transition.
> 
> The idea is to handle Nops' structures like the static ones. When
> the dynamic structures are allocated, we initialize all values that
> are normally statically defined.
> 
> The only exception is "new_func" in struct klp_func. It has to point
> to the original function and the address is known only when the object
> (module) is loaded. Note that we really need to set it. The address is
> used, for example, in klp_check_stack_func().
> 
> Nevertheless we still need to distinguish the dynamically allocated
> structures in some operations. For this, we add "nop" flag into
> struct klp_func and "dynamic" flag into struct klp_object. They
> need special handling in the following situations:
> 
>   + The structures are added into the lists of objects and functions
>     immediately. In fact, the lists were created for this purpose.
> 
>   + The address of the original function is known only when the patched
>     object (module) is loaded. Therefore it is copied later in
>     klp_init_object_loaded().
> 
>   + The ftrace handler must not set PC to func->new_func. It would cause
>     infinite loop because the address points back to the beginning of
>     the original function.
> 
>   + The various free() functions must free the structure itself.
> 
> Note that other ways to detect the dynamic structures are not considered
> safe. For example, even the statically defined struct klp_object might
> include empty funcs array. It might be there just to run some callbacks.
> 
> Also note that the safe iterator must be used in the free() functions.
> Otherwise already freed structures might get accessed.
> 
> Special callbacks handling:
> 
> The callbacks from the replaced patches are _not_ called by intention.
> It would be pretty hard to define a reasonable semantic and implement it.
> 
> It might even be counter-productive. The new patch is cumulative. It is
> supposed to include most of the changes from older patches. In most cases,
> it will not want to call pre_unpatch() post_unpatch() callbacks from
> the replaced patches. It would disable/break things for no good reasons.
> Also it should be easier to handle various scenarios in a single script
> in the new patch than think about interactions caused by running many
> scripts from older patches. Not to say that the old scripts even would
> not expect to be called in this situation.
> 
> Removing replaced patches:
> 
> One nice effect of the cumulative patches is that the code from the
> older patches is no longer used. Therefore the replaced patches can
> be removed. It has several advantages:
> 
>   + Nops' structs will no longer be necessary and might be removed.
>     This would save memory, restore performance (no ftrace handler),
>     allow clear view on what is really patched.
> 
>   + Disabling the patch will cause using the original code everywhere.
>     Therefore the livepatch callbacks could handle only one scenario.
>     Note that the situation is already complex enough when the patch
>     gets enabled. It is currently solved by calling callbacks only from
>     the new cumulative patch.
> 
>   + The state is clean in both the sysfs interface and lsmod. The modules
>     with the replaced livepatches might even get removed from the system.
> 
> Some people actually expected this behavior from the beginning. After all
> a cumulative patch is supposed to "completely" replace an existing one.
> It is like when a new version of an application replaces an older one.
> 
> This patch does the first step. It removes the replaced patches from
> the list of patches. It is safe. The consistency model ensures that
> they are no longer used. By other words, each process works only with
> the structures from klp_transition_patch.
> 
> The removal is done by a special function. It combines actions done by
> __disable_patch() and klp_complete_transition(). But it is a fast
> track without all the transaction-related stuff.
> 
> Signed-off-by: Jason Baron <jbaron@akamai.com>
> [pmladek@suse.com: Split, reuse existing code, simplified]
> Signed-off-by: Petr Mladek <pmladek@suse.com>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Jessica Yu <jeyu@kernel.org>
> Cc: Jiri Kosina <jikos@kernel.org>
> Cc: Miroslav Benes <mbenes@suse.cz>

Acked-by: Miroslav Benes <mbenes@suse.cz>

Miroslav

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 08/11] livepatch: Remove Nop structures when unused
  2019-01-09 12:43 ` [PATCH v15 08/11] livepatch: Remove Nop structures when unused Petr Mladek
@ 2019-01-10 13:42   ` Miroslav Benes
  0 siblings, 0 replies; 20+ messages in thread
From: Miroslav Benes @ 2019-01-10 13:42 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel

On Wed, 9 Jan 2019, Petr Mladek wrote:

> Replaced patches are removed from the stack when the transition is
> finished. It means that Nop structures will never be needed again
> and can be removed. Why should we care?
> 
>   + Nop structures give the impression that the function is patched
>     even though the ftrace handler has no effect.
> 
>   + Ftrace handlers do not come for free. They cause slowdown that might
>     be visible in some workloads. The ftrace-related slowdown might
>     actually be the reason why the function is no longer patched in
>     the new cumulative patch. One would expect that cumulative patch
>     would help solve these problems as well.
> 
>   + Cumulative patches are supposed to replace any earlier version of
>     the patch. The amount of NOPs depends on which version was replaced.
>     This multiplies the amount of scenarios that might happen.
> 
>     One might say that NOPs are innocent. But there are even optimized
>     NOP instructions for different processors, for example, see
>     arch/x86/kernel/alternative.c. And klp_ftrace_handler() is much
>     more complicated.
> 
>   + It sounds natural to clean up a mess that is no longer needed.
>     It could only be worse if we do not do it.
> 
> This patch allows to unpatch and free the dynamic structures independently
> when the transition finishes.
> 
> The free part is a bit tricky because kobject free callbacks are called
> asynchronously. We could not wait for them easily. Fortunately, we do
> not have to. Any further access can be avoided by removing them from
> the dynamic lists.
> 
> Signed-off-by: Petr Mladek <pmladek@suse.com>

Acked-by: Miroslav Benes <mbenes@suse.cz>

Miroslav

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 10/11] livepatch: Remove ordering (stacking) of the livepatches
  2019-01-09 12:43 ` [PATCH v15 10/11] livepatch: Remove ordering (stacking) of the livepatches Petr Mladek
@ 2019-01-10 13:56   ` Miroslav Benes
  0 siblings, 0 replies; 20+ messages in thread
From: Miroslav Benes @ 2019-01-10 13:56 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Josh Poimboeuf, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel

On Wed, 9 Jan 2019, Petr Mladek wrote:

> The atomic replace and cumulative patches were introduced as a more secure
> way to handle dependent patches. They simplify the logic:
> 
>   + Any new cumulative patch is supposed to take over shadow variables
>     and changes made by callbacks from previous livepatches.
> 
>   + All replaced patches are discarded and the modules can be unloaded.
>     As a result, there is only one scenario when a cumulative livepatch
>     gets disabled.
> 
> The different handling of "normal" and cumulative patches might cause
> confusion. It would make sense to keep only one mode. On the other hand,
> it would be rude to enforce using the cumulative livepatches even for
> trivial and independent (hot) fixes.
> 
> However, the stack of patches is not really necessary any longer.
> The patch ordering was never clearly visible via the sysfs interface.
> Also the "normal" patches need a lot of caution anyway.
> 
> Note that the list of enabled patches is still necessary but the ordering
> is no longer enforced.
> 
> Otherwise, the code is ready to disable livepatches in a random order.
> Namely, klp_check_stack_func() always looks for the function from
> the livepatch that is being disabled. klp_func structures are just
> removed from the related func_stack. Finally, the ftrace handler
> is removed only when the func_stack becomes empty.
> 
> Signed-off-by: Petr Mladek <pmladek@suse.com>
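
To make the last quoted paragraph concrete: disabling a patch now only
removes its klp_func entries from the per-function func_stack, and the
ftrace handler keeps using whatever entry is currently first on that
stack, no matter which patch it belongs to. A rough sketch of the
unpatch side, assuming the internal struct klp_ops bookkeeping from
kernel/livepatch/patch.c; the ftrace filter handling and error checks
are omitted and the names are only approximate:

    #include <linux/ftrace.h>
    #include <linux/list.h>
    #include <linux/livepatch.h>
    #include <linux/slab.h>
    /* struct klp_ops (fops, func_stack, node) lives in kernel/livepatch/patch.h */

    /* Sketch only: remove one klp_func regardless of patch ordering. */
    static void klp_unpatch_func_sketch(struct klp_ops *ops, struct klp_func *func)
    {
            if (list_is_singular(&ops->func_stack)) {
                    /* Last patch hooking this function: tear down the ftrace hook. */
                    unregister_ftrace_function(&ops->fops);
                    list_del_rcu(&func->stack_node);
                    list_del(&ops->node);
                    kfree(ops);
            } else {
                    /* Other patches still hook the function; just drop our entry. */
                    list_del_rcu(&func->stack_node);
            }

            func->patched = false;
    }

Nothing here depends on where the patch sits in any global ordering,
which is why the stacking can be dropped without touching the handler.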

Acked-by: Miroslav Benes <mbenes@suse.cz>

Miroslav

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 00/11] livepatch: Atomic replace feature
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (10 preceding siblings ...)
  2019-01-09 12:43 ` [PATCH v15 11/11] selftests/livepatch: introduce tests Petr Mladek
@ 2019-01-11 17:44 ` Josh Poimboeuf
  2019-01-11 19:56 ` Jiri Kosina
  12 siblings, 0 replies; 20+ messages in thread
From: Josh Poimboeuf @ 2019-01-11 17:44 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Jiri Kosina, Miroslav Benes, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel

On Wed, Jan 09, 2019 at 01:43:18PM +0100, Petr Mladek wrote:
> Changes against v14:
> 
>   + Rename klp_active -> klp_added [Josh]
> 
>   + Do not clear klp_added variable in klp_free*() [Josh]
> 
>   + Rename klp_init_patch_before_free() ->  klp_init_patch_early()
>     [Josh]
> 
>   + Do not call klp_free_patch() when
>     kobject_init_and_add(&patch->kobj) fails in 3rd patch.
>     The final code stays the same after the refactoring
>     in 5th patch. [Josh]
> 
>   + Better 4th patch subject [Josh]
> 
>   + Use safe iterators in klp_free() functions already in 7th
>     patch instead of in 8th [Josh]
> 
>   + Rename free_all -> nops_only and revert the logic in
>     klp_free*() and klp_unpatch_object*() functions [Josh]
> 
>   + Do not refuse loading conflicting patches. Just remove
>     the ordering (stacking) [Josh]
> 
>   + fixed typos, improved wording [all]

Petr,

This looks great to me.  Thanks for all your patience and hard work!

Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>

-- 
Josh

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v15 00/11] livepatch: Atomic replace feature
  2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
                   ` (11 preceding siblings ...)
  2019-01-11 17:44 ` [PATCH v15 00/11] livepatch: Atomic replace feature Josh Poimboeuf
@ 2019-01-11 19:56 ` Jiri Kosina
  12 siblings, 0 replies; 20+ messages in thread
From: Jiri Kosina @ 2019-01-11 19:56 UTC (permalink / raw)
  To: Petr Mladek
  Cc: Josh Poimboeuf, Miroslav Benes, Jason Baron, Joe Lawrence,
	Evgenii Shatokhin, live-patching, linux-kernel

Hi,

I've now applied this patchset, so please send any further fixups as
incremental patches on top of for-5.1/atomic-replace.

Thanks to everybody involved (especially Petr) for the patience and all 
the invested effort.

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2019-01-11 19:56 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-09 12:43 [PATCH v15 00/11] livepatch: Atomic replace feature Petr Mladek
2019-01-09 12:43 ` [PATCH v15 01/11] livepatch: Change unsigned long old_addr -> void *old_func in struct klp_func Petr Mladek
2019-01-09 12:43 ` [PATCH v15 02/11] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code Petr Mladek
2019-01-09 12:43 ` [PATCH v15 03/11] livepatch: Consolidate klp_free functions Petr Mladek
2019-01-09 14:35   ` Miroslav Benes
2019-01-09 12:43 ` [PATCH v15 04/11] livepatch: Don't block the removal of patches loaded after a forced transition Petr Mladek
2019-01-09 14:50   ` Miroslav Benes
2019-01-09 12:43 ` [PATCH v15 05/11] livepatch: Simplify API by removing registration step Petr Mladek
2019-01-10 12:32   ` Miroslav Benes
2019-01-09 12:43 ` [PATCH v15 06/11] livepatch: Use lists to manage patches, objects and functions Petr Mladek
2019-01-09 12:43 ` [PATCH v15 07/11] livepatch: Add atomic replace Petr Mladek
2019-01-10 13:05   ` Miroslav Benes
2019-01-09 12:43 ` [PATCH v15 08/11] livepatch: Remove Nop structures when unused Petr Mladek
2019-01-10 13:42   ` Miroslav Benes
2019-01-09 12:43 ` [PATCH v15 09/11] livepatch: Atomic replace and cumulative patches documentation Petr Mladek
2019-01-09 12:43 ` [PATCH v15 10/11] livepatch: Remove ordering (stacking) of the livepatches Petr Mladek
2019-01-10 13:56   ` Miroslav Benes
2019-01-09 12:43 ` [PATCH v15 11/11] selftests/livepatch: introduce tests Petr Mladek
2019-01-11 17:44 ` [PATCH v15 00/11] livepatch: Atomic replace feature Josh Poimboeuf
2019-01-11 19:56 ` Jiri Kosina
