* [PATCH 0/3] livepatch: introduce atomic replace
@ 2017-07-19 17:18 Jason Baron
  2017-07-19 17:18 ` [PATCH 1/3] livepatch: Add klp_object and klp_func iterators Jason Baron
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Jason Baron @ 2017-07-19 17:18 UTC (permalink / raw)
  To: linux-kernel, live-patching; +Cc: jpoimboe, jeyu, jikos, mbenes, pmladek

Hi,

In testing livepatch, I found that when doing cumulative patches, if a patched
function is completely reverted by a subsequent patch (back to its original
state), livepatch does not revert the function to its original state.
Specifically, if patch A introduces a change to function 1, and patch B reverts
the change to function 1 and introduces changes to, say, functions 2 and 3 as
well, the change that patch A introduced to function 1 is still present. This
could be addressed by first completely removing patch A (disable and then
rmmod) and then inserting patch B (insmod and enable), but this leaves an
unpatched window. In discussing this issue with Josh on the kpatch mailing
list, he mentioned that we could get 'atomic replace working properly', and
that is the direction of this patchset:
https://www.redhat.com/archives/kpatch/2017-June/msg00005.html
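
To make this concrete, below is a minimal sketch of the two patches in the
scenario above (all function names are hypothetical, for illustration only).
Patch B carries no entry at all for function_1, which is exactly why patch
A's change to function_1 survives when B is simply stacked on top:

#include <linux/livepatch.h>

/* Hypothetical replacement functions, sketch only. */
static int livepatch_a_function_1(void) { return 0; }
static int livepatch_b_function_2(void) { return 0; }
static int livepatch_b_function_3(void) { return 0; }

/* Patch A: changes function_1. */
static struct klp_func funcs_a[] = {
	{
		.old_name = "function_1",
		.new_func = livepatch_a_function_1,
	}, { }
};

/*
 * Patch B: patches function_2 and function_3, but has no entry for
 * function_1, so enabling B on top of A leaves A's function_1
 * replacement on the func_stack.
 */
static struct klp_func funcs_b[] = {
	{
		.old_name = "function_2",
		.new_func = livepatch_b_function_2,
	}, {
		.old_name = "function_3",
		.new_func = livepatch_b_function_3,
	}, { }
};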

Patches:

1) livepatch: Add klp_object and klp_func iterators
	Just a prep patch for the 'atomic revert' feature

2) livepatch: add atomic replace
	Core feature

3) livepatch: Add a sysctl livepatch_mode for atomic replace
	Introduces a knob for enabling atomic replace. I hate knobs, and perhaps
	it's possible to default to cumulative replace? Although I suspect there
	are workflows relying on the existing behavior - I'm not sure. It may be
	desirable to associate the knob with the patch itself, as with the
	'immediate' flag (a sketch follows below), so that we don't introduce a
	global sysctl that would likely also need to be built-in if there are
	patches in the initrd.
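
As a sketch of that per-patch alternative, a patch module might opt in via a
new flag next to 'immediate' (the 'replace' field name is an assumption here,
not something this series implements):

#include <linux/livepatch.h>

static struct klp_object objs[] = {
	{
		/* name being NULL means vmlinux; funcs omitted in sketch */
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
	/*
	 * .replace = true,
	 *
	 * Assumed per-patch flag, named by analogy with 'immediate':
	 * replace all previously loaded patches when enabled.
	 */
};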

Thanks,

-Jason

Jason Baron (3):
  livepatch: Add klp_object and klp_func iterators
  livepatch: add atomic replace
  livepatch: Add a sysctl livepatch_mode for atomic replace

 include/linux/livepatch.h     | 118 ++++++++++++++++++++++++++++++--
 kernel/livepatch/core.c       | 154 ++++++++++++++++++++++++++++++++++++++++--
 kernel/livepatch/core.h       |   4 ++
 kernel/livepatch/patch.c      |  23 ++++---
 kernel/livepatch/patch.h      |   1 +
 kernel/livepatch/transition.c |  79 +++++++++++++++++++---
 kernel/sysctl.c               |  12 ++++
 7 files changed, 362 insertions(+), 29 deletions(-)

-- 
2.6.1


* [PATCH 1/3] livepatch: Add klp_object and klp_func iterators
  2017-07-19 17:18 [PATCH 0/3] livepatch: introduce atomic replace Jason Baron
@ 2017-07-19 17:18 ` Jason Baron
  2017-08-24 14:25   ` Petr Mladek
  2017-07-19 17:18 ` [PATCH 2/3] livepatch: add atomic replace Jason Baron
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Jason Baron @ 2017-07-19 17:18 UTC (permalink / raw)
  To: linux-kernel, live-patching; +Cc: jpoimboe, jeyu, jikos, mbenes, pmladek

In preparation for introducing atomic replace, introduce iterators for
klp_func and klp_object, such that objects and functions can be dynamically
allocated (needed for atomic replace). Note that this patch is careful not to
grow the size of klp_func, as that's the most common data structure. This
patch is intended to effectively be a no-op until atomic replace is
introduced.
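
As a usage sketch of the iterator API added below, callers declare an
on-stack iterator and hand it to the traversal macros (the function here is
illustrative only; it mirrors the conversions done in this patch):

#include <linux/livepatch.h>
#include <linux/printk.h>

static void example_walk(struct klp_patch *patch)
{
	struct klp_object *obj;
	struct klp_func *func;
	struct obj_iter o_iter;
	struct func_iter f_iter;

	klp_for_each_object(patch, obj, &o_iter) {
		klp_for_each_func(obj, func, &f_iter) {
			/*
			 * Visits both the static funcs array and any
			 * dynamically allocated entries on the lists.
			 */
			pr_debug("func: %s\n", func->old_name);
		}
	}
}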

Signed-off-by: Jason Baron <jbaron@akamai.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Petr Mladek <pmladek@suse.com>
---
 include/linux/livepatch.h     | 106 ++++++++++++++++++++++++++++++++++++++++--
 kernel/livepatch/core.c       |  25 +++++++---
 kernel/livepatch/patch.c      |   9 ++--
 kernel/livepatch/transition.c |  18 ++++---
 4 files changed, 137 insertions(+), 21 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 194991e..5038337 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/ftrace.h>
 #include <linux/completion.h>
+#include <linux/list.h>
 
 #if IS_ENABLED(CONFIG_LIVEPATCH)
 
@@ -88,10 +89,23 @@ struct klp_func {
 };
 
 /**
+ * struct klp_func_no_op - internal structure used to link no_op functions,
+ *			    which avoids the need to bloat struct klp_func
+ * @orig_func:	embedded struct klp_func
+ * @func_entry:	used to link struct klp_func_no_op to struct klp_object
+ */
+struct klp_func_no_op {
+	struct klp_func orig_func;
+	struct list_head func_entry;
+};
+
+/**
  * struct klp_object - kernel object structure for live patching
  * @name:	module name (or NULL for vmlinux)
  * @funcs:	function entries for functions to be patched in the object
  * @kobj:	kobject for sysfs resources
+ * @func_list:	head of list for struct klp_func_no_op
+ * @obj_entry:	used to link struct klp_object to struct klp_patch
  * @mod:	kernel module associated with the patched object
  *		(NULL for vmlinux)
  * @patched:	the object's funcs have been added to the klp_ops list
@@ -103,6 +117,8 @@ struct klp_object {
 
 	/* internal */
 	struct kobject kobj;
+	struct list_head func_list;
+	struct list_head obj_entry;
 	struct module *mod;
 	bool patched;
 };
@@ -114,6 +130,7 @@ struct klp_object {
  * @immediate:  patch all funcs immediately, bypassing safety mechanisms
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
+ * @obj_list:	head of list for dynamically allocated struct klp_object
  * @enabled:	the patch is enabled (but operation may be incomplete)
  * @finish:	for waiting till it is safe to remove the patch module
  */
@@ -126,17 +143,96 @@ struct klp_patch {
 	/* internal */
 	struct list_head list;
 	struct kobject kobj;
+	struct list_head obj_list;
 	bool enabled;
 	struct completion finish;
 };
 
-#define klp_for_each_object(patch, obj) \
+struct obj_iter {
+	struct klp_object *obj;
+	struct list_head *obj_list_head;
+	struct list_head *obj_list_pos;
+};
+
+static inline struct klp_object *obj_iter_next(struct obj_iter *iter)
+{
+	struct klp_object *obj;
+
+	if (iter->obj->funcs || iter->obj->name) {
+		obj = iter->obj;
+		iter->obj++;
+	} else {
+		if (iter->obj_list_pos == iter->obj_list_head) {
+			obj = NULL;
+		} else {
+			obj = list_entry(iter->obj_list_pos, struct klp_object,
+					 obj_entry);
+			iter->obj_list_pos = iter->obj_list_pos->next;
+		}
+	}
+
+	return obj;
+}
+
+static inline struct klp_object *obj_iter_init(struct klp_patch *patch,
+					       struct obj_iter *iter)
+{
+	iter->obj = patch->objs;
+	iter->obj_list_head = &patch->obj_list;
+	iter->obj_list_pos = iter->obj_list_head->next;
+
+	return obj_iter_next(iter);
+}
+
+#define klp_for_each_object(patch, obj, iter) \
+	for (obj = obj_iter_init(patch, iter); obj; obj = obj_iter_next(iter))
+
+#define klp_for_each_object_core(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
-#define klp_for_each_func(obj, func) \
-	for (func = obj->funcs; \
-	     func->old_name || func->new_func || func->old_sympos; \
-	     func++)
+struct func_iter {
+	struct klp_func *func;
+	struct list_head *func_list_head;
+	struct list_head *func_list_pos;
+};
+
+static inline struct klp_func *func_iter_next(struct func_iter *iter)
+{
+	struct klp_func *func;
+	struct klp_func_no_op *func_no_op;
+
+	if (iter->func->old_name || iter->func->new_func ||
+					iter->func->old_sympos) {
+		func = iter->func;
+		iter->func++;
+	} else {
+		if (iter->func_list_pos == iter->func_list_head) {
+			func = NULL;
+		} else {
+			func_no_op = list_entry(iter->func_list_pos,
+						struct klp_func_no_op,
+						func_entry);
+			func = &func_no_op->orig_func;
+			iter->func_list_pos = iter->func_list_pos->next;
+		}
+	}
+
+	return func;
+}
+
+static inline struct klp_func *func_iter_init(struct klp_object *obj,
+					      struct func_iter *iter)
+{
+	iter->func = obj->funcs;
+	iter->func_list_head = &obj->func_list;
+	iter->func_list_pos = iter->func_list_head->next;
+
+	return func_iter_next(iter);
+}
+
+#define klp_for_each_func(obj, func, iter) \
+	for (func = func_iter_init(obj, iter); func; \
+	     func = func_iter_next(iter))
 
 int klp_register_patch(struct klp_patch *);
 int klp_unregister_patch(struct klp_patch *);
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index b9628e4..e63f478 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -346,6 +346,7 @@ EXPORT_SYMBOL_GPL(klp_disable_patch);
 static int __klp_enable_patch(struct klp_patch *patch)
 {
 	struct klp_object *obj;
+	struct obj_iter o_iter;
 	int ret;
 
 	if (klp_transition_patch)
@@ -384,7 +385,7 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	 */
 	smp_wmb();
 
-	klp_for_each_object(patch, obj) {
+	klp_for_each_object(patch, obj, &o_iter) {
 		if (!klp_is_object_loaded(obj))
 			continue;
 
@@ -571,10 +572,11 @@ static void klp_free_funcs_limited(struct klp_object *obj,
 static void klp_free_object_loaded(struct klp_object *obj)
 {
 	struct klp_func *func;
+	struct func_iter f_iter;
 
 	obj->mod = NULL;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func, &f_iter)
 		func->old_addr = 0;
 }
 
@@ -630,6 +632,7 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 				  struct klp_object *obj)
 {
 	struct klp_func *func;
+	struct func_iter f_iter;
 	int ret;
 
 	module_disable_ro(patch->mod);
@@ -642,7 +645,7 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 	arch_klp_init_object_loaded(patch, obj);
 	module_enable_ro(patch->mod, true);
 
-	klp_for_each_func(obj, func) {
+	klp_for_each_func(obj, func, &f_iter) {
 		ret = klp_find_object_symbol(obj->name, func->old_name,
 					     func->old_sympos,
 					     &func->old_addr);
@@ -672,6 +675,7 @@ static int klp_init_object_loaded(struct klp_patch *patch,
 static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
 {
 	struct klp_func *func;
+	struct func_iter f_iter;
 	int ret;
 	const char *name;
 
@@ -689,7 +693,7 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
 	if (ret)
 		return ret;
 
-	klp_for_each_func(obj, func) {
+	klp_for_each_func(obj, func, &f_iter) {
 		ret = klp_init_func(obj, func);
 		if (ret)
 			goto free;
@@ -712,6 +716,7 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
 static int klp_init_patch(struct klp_patch *patch)
 {
 	struct klp_object *obj;
+	struct obj_iter o_iter;
 	int ret;
 
 	if (!patch->objs)
@@ -729,7 +734,11 @@ static int klp_init_patch(struct klp_patch *patch)
 		return ret;
 	}
 
-	klp_for_each_object(patch, obj) {
+	INIT_LIST_HEAD(&patch->obj_list);
+	klp_for_each_object_core(patch, obj)
+		INIT_LIST_HEAD(&obj->func_list);
+
+	klp_for_each_object(patch, obj, &o_iter) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
 			goto free;
@@ -835,6 +844,7 @@ int klp_module_coming(struct module *mod)
 	int ret;
 	struct klp_patch *patch;
 	struct klp_object *obj;
+	struct obj_iter o_iter;
 
 	if (WARN_ON(mod->state != MODULE_STATE_COMING))
 		return -EINVAL;
@@ -848,7 +858,7 @@ int klp_module_coming(struct module *mod)
 	mod->klp_alive = true;
 
 	list_for_each_entry(patch, &klp_patches, list) {
-		klp_for_each_object(patch, obj) {
+		klp_for_each_object(patch, obj, &o_iter) {
 			if (!klp_is_module(obj) || strcmp(obj->name, mod->name))
 				continue;
 
@@ -904,6 +914,7 @@ void klp_module_going(struct module *mod)
 {
 	struct klp_patch *patch;
 	struct klp_object *obj;
+	struct obj_iter o_iter;
 
 	if (WARN_ON(mod->state != MODULE_STATE_GOING &&
 		    mod->state != MODULE_STATE_COMING))
@@ -918,7 +929,7 @@ void klp_module_going(struct module *mod)
 	mod->klp_alive = false;
 
 	list_for_each_entry(patch, &klp_patches, list) {
-		klp_for_each_object(patch, obj) {
+		klp_for_each_object(patch, obj, &o_iter) {
 			if (!klp_is_module(obj) || strcmp(obj->name, mod->name))
 				continue;
 
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 52c4e90..1cfdabc 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -238,8 +238,9 @@ static int klp_patch_func(struct klp_func *func)
 void klp_unpatch_object(struct klp_object *obj)
 {
 	struct klp_func *func;
+	struct func_iter f_iter;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func, &f_iter)
 		if (func->patched)
 			klp_unpatch_func(func);
 
@@ -249,12 +250,13 @@ void klp_unpatch_object(struct klp_object *obj)
 int klp_patch_object(struct klp_object *obj)
 {
 	struct klp_func *func;
+	struct func_iter f_iter;
 	int ret;
 
 	if (WARN_ON(obj->patched))
 		return -EINVAL;
 
-	klp_for_each_func(obj, func) {
+	klp_for_each_func(obj, func, &f_iter) {
 		ret = klp_patch_func(func);
 		if (ret) {
 			klp_unpatch_object(obj);
@@ -269,8 +271,9 @@ int klp_patch_object(struct klp_object *obj)
 void klp_unpatch_objects(struct klp_patch *patch)
 {
 	struct klp_object *obj;
+	struct obj_iter o_iter;
 
-	klp_for_each_object(patch, obj)
+	klp_for_each_object(patch, obj, &o_iter)
 		if (obj->patched)
 			klp_unpatch_object(obj);
 }
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index b004a1f..e112826 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -81,6 +81,8 @@ static void klp_complete_transition(void)
 	struct task_struct *g, *task;
 	unsigned int cpu;
 	bool immediate_func = false;
+	struct obj_iter o_iter;
+	struct func_iter f_iter;
 
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
@@ -101,8 +103,8 @@ static void klp_complete_transition(void)
 	if (klp_transition_patch->immediate)
 		goto done;
 
-	klp_for_each_object(klp_transition_patch, obj) {
-		klp_for_each_func(obj, func) {
+	klp_for_each_object(klp_transition_patch, obj, &o_iter) {
+		klp_for_each_func(obj, func, &f_iter) {
 			func->transition = false;
 			if (func->immediate)
 				immediate_func = true;
@@ -244,6 +246,8 @@ static int klp_check_stack(struct task_struct *task, char *err_buf)
 	struct stack_trace trace;
 	struct klp_object *obj;
 	struct klp_func *func;
+	struct obj_iter o_iter;
+	struct func_iter f_iter;
 	int ret;
 
 	trace.skip = 0;
@@ -259,10 +263,10 @@ static int klp_check_stack(struct task_struct *task, char *err_buf)
 		return ret;
 	}
 
-	klp_for_each_object(klp_transition_patch, obj) {
+	klp_for_each_object(klp_transition_patch, obj, &o_iter) {
 		if (!obj->patched)
 			continue;
-		klp_for_each_func(obj, func) {
+		klp_for_each_func(obj, func, &f_iter) {
 			ret = klp_check_stack_func(func, &trace);
 			if (ret) {
 				snprintf(err_buf, STACK_ERR_BUF_SIZE,
@@ -470,6 +474,8 @@ void klp_init_transition(struct klp_patch *patch, int state)
 	unsigned int cpu;
 	struct klp_object *obj;
 	struct klp_func *func;
+	struct obj_iter o_iter;
+	struct func_iter f_iter;
 	int initial_state = !state;
 
 	WARN_ON_ONCE(klp_target_state != KLP_UNDEFINED);
@@ -531,8 +537,8 @@ void klp_init_transition(struct klp_patch *patch, int state)
 	 * When unpatching, the funcs are already in the func_stack and so are
 	 * already visible to the ftrace handler.
 	 */
-	klp_for_each_object(patch, obj)
-		klp_for_each_func(obj, func)
+	klp_for_each_object(patch, obj, &o_iter)
+		klp_for_each_func(obj, func, &f_iter)
 			func->transition = true;
 }
 
-- 
2.6.1


* [PATCH 2/3] livepatch: add atomic replace
  2017-07-19 17:18 [PATCH 0/3] livepatch: introduce atomic replace Jason Baron
  2017-07-19 17:18 ` [PATCH 1/3] livepatch: Add klp_object and klp_func iterators Jason Baron
@ 2017-07-19 17:18 ` Jason Baron
  2017-08-25  9:26   ` Petr Mladek
  2017-07-19 17:18 ` [PATCH 3/3] livepatch: Add a sysctl livepatch_mode for " Jason Baron
  2017-07-21 13:06 ` [PATCH 0/3] livepatch: introduce " Miroslav Benes
  3 siblings, 1 reply; 15+ messages in thread
From: Jason Baron @ 2017-07-19 17:18 UTC (permalink / raw)
  To: linux-kernel, live-patching; +Cc: jpoimboe, jeyu, jikos, mbenes, pmladek

When doing cumulative patches, if patch A introduces a change to function 1,
and patch B reverts the change to function 1 and introduces changes to say
function 2 and 3 as well, the change that patch A introduced to function 1
is still present.

Introduce atomic replace, by first creating a 'no_op' klp_func for all
the functions that are reverted by patch B. The reason that 'no_op'
klp_funcs are created, instead of just unregistering directly from the ftrace
function hook, is to ensure that the consistency model is properly preserved.
By introducing the 'no_op' functions, 'atomic revert' leverages the existing
consistency model code. Then, after transition to the new code, 'atomic
revert' unregisters the ftrace handlers that are associated with the 'no_op'
klp_funcs, such that we run the original unpatched function with no
additional no_op function overhead.
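
In the running example, patch B would effectively gain an entry for
function_1 like the sketch below (the real entries are allocated dynamically
in klp_init_patch_no_ops(), not defined statically; this is illustration
only):

static struct klp_func_no_op function_1_no_op = {
	.orig_func = {
		.old_name = "function_1",
		/*
		 * NULL new_func plus no_op == true: the ftrace handler
		 * skips klp_arch_set_pc(), so the original function runs.
		 */
		.new_func = NULL,
		.no_op = true,
	},
};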

Since 'atomic replace' has completely replaced any previous livepatch modules,
it explicitly disables the previous patch (in the example, patch A), such that
the livepatch module associated with patch A can be completely removed (rmmod).
Patch A is now in a permanently disabled state. But if patch A is removed from
the kernel with rmmod, it can be re-inserted (insmod) and act as an atomic
replace on top of patch B.

Signed-off-by: Jason Baron <jbaron@akamai.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Petr Mladek <pmladek@suse.com>
---
 include/linux/livepatch.h     |   8 ++-
 kernel/livepatch/core.c       | 124 ++++++++++++++++++++++++++++++++++++++++++
 kernel/livepatch/core.h       |   4 ++
 kernel/livepatch/patch.c      |  14 +++--
 kernel/livepatch/patch.h      |   1 +
 kernel/livepatch/transition.c |  61 ++++++++++++++++++++-
 6 files changed, 202 insertions(+), 10 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 5038337..6fd7222 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -49,6 +49,7 @@
  * @new_size:	size of the new function
  * @patched:	the func has been added to the klp_ops list
  * @transition:	the func is currently being applied or reverted
+ * @no_op:	this is a no_op function used to completely revert a function
  *
  * The patched and transition variables define the func's patching state.  When
  * patching, a func is always in one of the following states:
@@ -86,6 +87,7 @@ struct klp_func {
 	unsigned long old_size, new_size;
 	bool patched;
 	bool transition;
+	bool no_op;
 };
 
 /**
@@ -132,6 +134,7 @@ struct klp_object {
  * @kobj:	kobject for sysfs resources
  * @obj_list:	head of list for dynamically allocated struct klp_object
  * @enabled:	the patch is enabled (but operation may be incomplete)
+ * @replaced:	the patch has been replaced and cannot be re-enabled
  * @finish:	for waiting till it is safe to remove the patch module
  */
 struct klp_patch {
@@ -145,6 +148,7 @@ struct klp_patch {
 	struct kobject kobj;
 	struct list_head obj_list;
 	bool enabled;
+	bool replaced;
 	struct completion finish;
 };
 
@@ -201,8 +205,8 @@ static inline struct klp_func *func_iter_next(struct func_iter *iter)
 	struct klp_func *func;
 	struct klp_func_no_op *func_no_op;
 
-	if (iter->func->old_name || iter->func->new_func ||
-					iter->func->old_sympos) {
+	if (iter->func && (iter->func->old_name || iter->func->new_func ||
+			   iter->func->old_sympos)) {
 		func = iter->func;
 		iter->func++;
 	} else {
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index e63f478..bf353da 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -352,6 +352,9 @@ static int __klp_enable_patch(struct klp_patch *patch)
 	if (klp_transition_patch)
 		return -EBUSY;
 
+	if (patch->replaced)
+		return -EINVAL;
+
 	if (WARN_ON(patch->enabled))
 		return -EINVAL;
 
@@ -602,6 +605,118 @@ static void klp_free_patch(struct klp_patch *patch)
 		list_del(&patch->list);
 }
 
+void klp_patch_free_no_ops(struct klp_patch *patch)
+{
+	struct obj_iter o_iter;
+	struct func_iter f_iter;
+	struct klp_object *obj, *tmp_obj;
+	struct klp_func *func;
+	struct klp_func_no_op *func_no_op;
+
+	klp_for_each_object(patch, obj, &o_iter) {
+		klp_for_each_func(obj, func, &f_iter) {
+			if (func->no_op) {
+				func_no_op = container_of(func,
+							  struct klp_func_no_op,
+							  orig_func);
+				list_del_init(&func_no_op->func_entry);
+				kfree(func_no_op);
+			}
+		}
+	}
+	list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, obj_entry) {
+		list_del_init(&obj->obj_entry);
+		kfree(obj);
+	}
+}
+
+static int klp_init_patch_no_ops(struct klp_patch *patch)
+{
+	struct klp_object *obj, *prev_obj, *new_obj;
+	struct klp_func *prev_func, *func;
+	struct klp_func_no_op *new;
+	struct klp_patch *prev_patch;
+	struct obj_iter o_iter, prev_o_iter;
+	struct func_iter prev_f_iter, f_iter;
+	bool found, mod;
+
+	if (patch->list.prev == &klp_patches)
+		return 0;
+
+	prev_patch = list_prev_entry(patch, list);
+	klp_for_each_object(prev_patch, prev_obj, &prev_o_iter) {
+		if (!klp_is_object_loaded(prev_obj))
+			continue;
+
+		klp_for_each_func(prev_obj, prev_func, &prev_f_iter) {
+			found = false;
+			klp_for_each_object(patch, obj, &o_iter) {
+				klp_for_each_func(obj, func, &f_iter) {
+					if ((strcmp(prev_func->old_name,
+						    func->old_name) == 0) &&
+						(prev_func->old_sympos ==
+							func->old_sympos)) {
+						found = true;
+						break;
+					}
+				}
+				if (found)
+					break;
+			}
+			if (found)
+				continue;
+
+			new = kmalloc(sizeof(*new), GFP_KERNEL);
+			if (!new)
+				return -ENOMEM;
+			new->orig_func = *prev_func;
+			new->orig_func.old_name = prev_func->old_name;
+			new->orig_func.new_func = NULL;
+			new->orig_func.old_sympos = prev_func->old_sympos;
+			new->orig_func.immediate = prev_func->immediate;
+			new->orig_func.old_addr = prev_func->old_addr;
+			INIT_LIST_HEAD(&new->orig_func.stack_node);
+			new->orig_func.old_size = prev_func->old_size;
+			new->orig_func.new_size = 0;
+			new->orig_func.no_op = true;
+			new->orig_func.patched = false;
+			new->orig_func.transition = false;
+			found = false;
+			mod = klp_is_module(prev_obj);
+			klp_for_each_object(patch, obj, &o_iter) {
+				if (mod) {
+					if (klp_is_module(obj) &&
+					    strcmp(prev_obj->name,
+						   obj->name) == 0) {
+						found = true;
+						break;
+					}
+				} else if (!klp_is_module(obj)) {
+					found = true;
+					break;
+				}
+			}
+			if (found) {
+				list_add(&new->func_entry, &obj->func_list);
+			} else {
+				new_obj = kmalloc(sizeof(*new_obj), GFP_KERNEL);
+				if (!new_obj)
+					return -ENOMEM;
+				new_obj->name = prev_obj->name;
+				new_obj->funcs = NULL;
+				new_obj->mod = prev_obj->mod;
+				new_obj->patched = false;
+				INIT_LIST_HEAD(&new_obj->func_list);
+				INIT_LIST_HEAD(&new_obj->obj_entry);
+				list_add(&new->func_entry, &new_obj->func_list);
+				list_add(&new_obj->obj_entry, &patch->obj_list);
+			}
+		}
+	}
+
+	return 0;
+}
+
 static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 {
 	if (!func->old_name || !func->new_func)
@@ -725,6 +840,7 @@ static int klp_init_patch(struct klp_patch *patch)
 	mutex_lock(&klp_mutex);
 
 	patch->enabled = false;
+	patch->replaced = false;
 	init_completion(&patch->finish);
 
 	ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
@@ -746,12 +862,19 @@ static int klp_init_patch(struct klp_patch *patch)
 
 	list_add_tail(&patch->list, &klp_patches);
 
+	ret = klp_init_patch_no_ops(patch);
+	if (ret) {
+		list_del(&patch->list);
+		goto free;
+	}
+
 	mutex_unlock(&klp_mutex);
 
 	return 0;
 
 free:
 	klp_free_objects_limited(patch, obj);
+	klp_patch_free_no_ops(patch);
 
 	mutex_unlock(&klp_mutex);
 
@@ -786,6 +909,7 @@ int klp_unregister_patch(struct klp_patch *patch)
 	}
 
 	klp_free_patch(patch);
+	klp_patch_free_no_ops(patch);
 
 	mutex_unlock(&klp_mutex);
 
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index c74f24c..fa20e4d 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -1,6 +1,10 @@
 #ifndef _LIVEPATCH_CORE_H
 #define _LIVEPATCH_CORE_H
 
+#include <linux/livepatch.h>
+
 extern struct mutex klp_mutex;
 
+void klp_patch_free_no_ops(struct klp_patch *patch);
+
 #endif /* _LIVEPATCH_CORE_H */
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 1cfdabc..cbb8b9d 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -117,6 +117,8 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 		}
 	}
 
+	if (func->no_op)
+		goto unlock;
 	klp_arch_set_pc(regs, (unsigned long)func->new_func);
 unlock:
 	preempt_enable_notrace();
@@ -135,7 +137,7 @@ static unsigned long klp_get_ftrace_location(unsigned long faddr)
 }
 #endif
 
-static void klp_unpatch_func(struct klp_func *func)
+void klp_unpatch_func(struct klp_func *func, bool unregistered)
 {
 	struct klp_ops *ops;
 
@@ -155,9 +157,11 @@ static void klp_unpatch_func(struct klp_func *func)
 		if (WARN_ON(!ftrace_loc))
 			return;
 
-		WARN_ON(unregister_ftrace_function(&ops->fops));
-		WARN_ON(ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1, 0));
-
+		if (!unregistered) {
+			WARN_ON(unregister_ftrace_function(&ops->fops));
+			WARN_ON(ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1,
+				0));
+		}
 		list_del_rcu(&func->stack_node);
 		list_del(&ops->node);
 		kfree(ops);
@@ -242,7 +246,7 @@ void klp_unpatch_object(struct klp_object *obj)
 
 	klp_for_each_func(obj, func, &f_iter)
 		if (func->patched)
-			klp_unpatch_func(func);
+			klp_unpatch_func(func, false);
 
 	obj->patched = false;
 }
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index 0db2271..59c1430 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -29,5 +29,6 @@ struct klp_ops *klp_find_ops(unsigned long old_addr);
 int klp_patch_object(struct klp_object *obj);
 void klp_unpatch_object(struct klp_object *obj);
 void klp_unpatch_objects(struct klp_patch *patch);
+void klp_unpatch_func(struct klp_func *func, bool unregistered);
 
 #endif /* _LIVEPATCH_PATCH_H */
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index e112826..43e1609 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -21,6 +21,8 @@
 
 #include <linux/cpu.h>
 #include <linux/stacktrace.h>
+#include <linux/ftrace.h>
+#include <linux/delay.h>
 #include "core.h"
 #include "patch.h"
 #include "transition.h"
@@ -70,6 +72,7 @@ static void klp_synchronize_transition(void)
 	schedule_on_each_cpu(klp_sync);
 }
 
+
 /*
  * The transition to the target patch state is complete.  Clean up the data
  * structures.
@@ -81,8 +84,32 @@ static void klp_complete_transition(void)
 	struct task_struct *g, *task;
 	unsigned int cpu;
 	bool immediate_func = false;
+	bool no_op = false;
 	struct obj_iter o_iter;
 	struct func_iter f_iter;
+	unsigned long ftrace_loc;
+	struct klp_ops *ops;
+	struct klp_patch *prev_patch;
+
+	/* remove ftrace hook for all no_op functions. */
+	if (klp_target_state == KLP_PATCHED) {
+		klp_for_each_object(klp_transition_patch, obj, &o_iter) {
+			klp_for_each_func(obj, func, &f_iter) {
+				if (!func->no_op)
+					continue;
+
+				ops = klp_find_ops(func->old_addr);
+				if (WARN_ON(!ops))
+					continue;
+				ftrace_loc = func->old_addr;
+				WARN_ON(unregister_ftrace_function(&ops->fops));
+				WARN_ON(ftrace_set_filter_ip(&ops->fops,
+							     ftrace_loc,
+							     1, 0));
+				no_op = true;
+			}
+		}
+	}
 
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
@@ -90,7 +117,9 @@ static void klp_complete_transition(void)
 		 * remove the new functions from the func_stack.
 		 */
 		klp_unpatch_objects(klp_transition_patch);
+	}
 
+	if (klp_target_state == KLP_UNPATCHED || no_op) {
 		/*
 		 * Make sure klp_ftrace_handler() can no longer see functions
 		 * from this patch on the ops->func_stack.  Otherwise, after
@@ -132,6 +161,24 @@ static void klp_complete_transition(void)
 	}
 
 done:
+	/* remove and free any no_op functions */
+	if (no_op && klp_target_state == KLP_PATCHED) {
+		prev_patch = list_prev_entry(klp_transition_patch, list);
+		if (prev_patch->enabled) {
+			klp_unpatch_objects(prev_patch);
+			prev_patch->enabled = false;
+			prev_patch->replaced = true;
+			module_put(prev_patch->mod);
+		}
+		klp_for_each_object(klp_transition_patch, obj, &o_iter) {
+			klp_for_each_func(obj, func, &f_iter) {
+				if (func->no_op)
+					klp_unpatch_func(func, true);
+			}
+		}
+		klp_patch_free_no_ops(klp_transition_patch);
+	}
+
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
 }
@@ -204,10 +251,18 @@ static int klp_check_stack_func(struct klp_func *func,
 		if (klp_target_state == KLP_UNPATCHED) {
 			 /*
 			  * Check for the to-be-unpatched function
-			  * (the func itself).
+			  * (the func itself). If we're unpatching
+			  * a no-op, then we're running the original
+			  * function. We never 'patch' a no-op function,
+			  * since we just remove the ftrace hook.
 			  */
-			func_addr = (unsigned long)func->new_func;
-			func_size = func->new_size;
+			if (func->no_op) {
+				func_addr = (unsigned long)func->old_addr;
+				func_size = func->old_size;
+			} else {
+				func_addr = (unsigned long)func->new_func;
+				func_size = func->new_size;
+			}
 		} else {
 			/*
 			 * Check for the to-be-patched function
-- 
2.6.1


* [PATCH 3/3] livepatch: Add a sysctl livepatch_mode for atomic replace
  2017-07-19 17:18 [PATCH 0/3] livepatch: introduce atomic replace Jason Baron
  2017-07-19 17:18 ` [PATCH 1/3] livepatch: Add klp_object and klp_func iterators Jason Baron
  2017-07-19 17:18 ` [PATCH 2/3] livepatch: add atomic replace Jason Baron
@ 2017-07-19 17:18 ` Jason Baron
  2017-07-21 13:06 ` [PATCH 0/3] livepatch: introduce " Miroslav Benes
  3 siblings, 0 replies; 15+ messages in thread
From: Jason Baron @ 2017-07-19 17:18 UTC (permalink / raw)
  To: linux-kernel, live-patching; +Cc: jpoimboe, jeyu, jikos, mbenes, pmladek

Introduce a sysctl knob such that, by default, livepatch is not in
'atomic replace' mode. A '0' in /proc/sys/kernel/livepatch_mode selects
the current default mode, while a '1' selects atomic replace.

Signed-off-by: Jason Baron <jbaron@akamai.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Petr Mladek <pmladek@suse.com>
---
 include/linux/livepatch.h |  8 ++++++++
 kernel/livepatch/core.c   |  5 +++++
 kernel/sysctl.c           | 12 ++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 6fd7222..08e760a 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -35,6 +35,14 @@
 #define KLP_UNPATCHED	 0
 #define KLP_PATCHED	 1
 
+/* livepatch mode */
+
+extern int sysctl_livepatch_mode;
+enum {
+	LIVEPATCH_MODE_DEFAULT,
+	LIVEPATCH_MODE_REPLACE,
+};
+
 /**
  * struct klp_func - function structure for live patching
  * @old_name:	name of the function to be patched
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index bf353da..b1df5c4 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -49,6 +49,8 @@ static LIST_HEAD(klp_patches);
 
 static struct kobject *klp_root_kobj;
 
+int sysctl_livepatch_mode;
+
 static bool klp_is_module(struct klp_object *obj)
 {
 	return obj->name;
@@ -643,6 +645,9 @@ static int klp_init_patch_no_ops(struct klp_patch *patch)
 	if (patch->list.prev == &klp_patches)
 		return 0;
 
+	if (sysctl_livepatch_mode != LIVEPATCH_MODE_REPLACE)
+		return 0;
+
 	prev_patch = list_prev_entry(patch, list);
 	klp_for_each_object(prev_patch, prev_obj, &prev_o_iter) {
 		if (!klp_is_object_loaded(prev_obj))
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 4dfba1a..3a0a1f6 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -67,6 +67,7 @@
 #include <linux/kexec.h>
 #include <linux/bpf.h>
 #include <linux/mount.h>
+#include <linux/livepatch.h>
 
 #include <linux/uaccess.h>
 #include <asm/processor.h>
@@ -1203,6 +1204,17 @@ static struct ctl_table kern_table[] = {
 		.extra2		= &one,
 	},
 #endif
+#ifdef CONFIG_LIVEPATCH
+	{
+		.procname	= "livepatch_mode",
+		.data		= &sysctl_livepatch_mode,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= &zero,
+		.extra2		= &one,
+	},
+#endif
 	{ }
 };
 
-- 
2.6.1


* Re: [PATCH 0/3] livepatch: introduce atomic replace
  2017-07-19 17:18 [PATCH 0/3] livepatch: introduce atomic replace Jason Baron
                   ` (2 preceding siblings ...)
  2017-07-19 17:18 ` [PATCH 3/3] livepatch: Add a sysctl livepatch_mode for " Jason Baron
@ 2017-07-21 13:06 ` Miroslav Benes
  2017-07-21 17:56   ` Jason Baron
  3 siblings, 1 reply; 15+ messages in thread
From: Miroslav Benes @ 2017-07-21 13:06 UTC (permalink / raw)
  To: Jason Baron; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, pmladek

On Wed, 19 Jul 2017, Jason Baron wrote:

> Hi,
> 
> In testing livepatch, I found that when doing cumulative patches, if a patched
> function is completely reverted by a subsequent patch (back to its original
> state), livepatch does not revert the function to its original state.
> Specifically, if patch A introduces a change to function 1, and patch B
> reverts the change to function 1 and introduces changes to, say, functions 2
> and 3 as well, the change that patch A introduced to function 1 is still
> present. This could be addressed by first completely removing patch A (disable
> and then rmmod) and then inserting patch B (insmod and enable), but this
> leaves an unpatched window. In discussing this issue with Josh on the kpatch
> mailing list, he mentioned that we could get 'atomic replace working
> properly', and that is the direction of this patchset:
> https://www.redhat.com/archives/kpatch/2017-June/msg00005.html

Hi Jason,

this has been on my TODO list for a long time now, so thanks for working
on this. We have the same feature in kGraft and we use it heavily (in fact
we distribute our patches as cumulative, "replace_all" patches, as we call
them).

The forward port of the feature from kGraft is unfortunately not
straightforward. We do not have a concept of klp_target_state there, so we
can freely let functions be patched or reverted in one go. We cannot do
the same upstream. At first glance, you used nop functions exactly for this
case. Nice hack :).
 
> Patches:
> 
> 1) livepatch: Add klp_object and klp_func iterators
> 	Just a prep patch for the 'atomic revert' feature
> 
> 2) livepatch: add atomic replace
> 	Core feature
> 
> 3) livepatch: Add a sysctl livepatch_mode for atomic replace
> 	Introduces a knob for enabling atomic replace. I hate knobs, and perhaps
> 	it's possible to default to cumulative replace? Although I suspect there
> 	are workflows relying on the existing behavior - I'm not sure. It may be
> 	desirable to associate the knob with the patch itself, as with the
> 	'immediate' flag, so that we don't introduce a global sysctl that would
> 	likely also need to be built-in if there are patches in the initrd.

Yes. I think it should be associated with the patch itself. This would 
allow more flexible behaviour. You could stack more patches on top of an
"atomic replace" patch.

Anyway, I'm on holiday next week, so I'll take a proper look the week 
after.

Thanks,
Miroslav


* Re: [PATCH 0/3] livepatch: introduce atomic replace
  2017-07-21 13:06 ` [PATCH 0/3] livepatch: introduce " Miroslav Benes
@ 2017-07-21 17:56   ` Jason Baron
  2017-08-10 11:12     ` Miroslav Benes
  0 siblings, 1 reply; 15+ messages in thread
From: Jason Baron @ 2017-07-21 17:56 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, pmladek



On 07/21/2017 09:06 AM, Miroslav Benes wrote:
> On Wed, 19 Jul 2017, Jason Baron wrote:
>
>> Hi,
>>
>> In testing livepatch, I found that when doing cumulative patches, if a patched
>> function is completely reverted by a subsequent patch (back to its original
>> state), livepatch does not revert the function to its original state.
>> Specifically, if patch A introduces a change to function 1, and patch B
>> reverts the change to function 1 and introduces changes to, say, functions 2
>> and 3 as well, the change that patch A introduced to function 1 is still
>> present. This could be addressed by first completely removing patch A (disable
>> and then rmmod) and then inserting patch B (insmod and enable), but this
>> leaves an unpatched window. In discussing this issue with Josh on the kpatch
>> mailing list, he mentioned that we could get 'atomic replace working
>> properly', and that is the direction of this patchset:
>> https://www.redhat.com/archives/kpatch/2017-June/msg00005.html
>
> Hi Jason,
>
> this has been on my TODO list for a long time now, so thanks for working
> on this. We have the same feature in kGraft and we use it heavily (in fact
> we distribute our patches as cumulative, "replace_all" patches, as we call
> them).
>

Hi Miroslav,

Cool - we feel like this is an important feature as well and would like 
to have an upstream solution as well.

> The forward port of the feature from kGraft is unfortunately not
> straightforward. We do not have a concept of klp_target_state there, so we
> can freely let functions be patched or reverted in one go. We cannot do
> the same upstream. At first glance, you used nop functions exactly for this
> case. Nice hack :).
>
>> Patches:
>>
>> 1) livepatch: Add klp_object and klp_func iterators
>> 	Just a prep patch for the 'atomic revert' feature
>>
>> 2) livepatch: add atomic replace
>> 	Core feature
>>
>> 3) livepatch: Add a sysctl livepatch_mode for atomic replace
>> 	Introduces a knob for enabling atomic replace. I hate knobs, and perhaps
>> 	it's possible to default to cumulative replace? Although I suspect there
>> 	are workflows relying on the existing behavior - I'm not sure. It may be
>> 	desirable to associate the knob with the patch itself, as with the
>> 	'immediate' flag, so that we don't introduce a global sysctl that would
>> 	likely also need to be built-in if there are patches in the initrd.
>
> Yes. I think it should be associated with the patch itself. This would
> allow more flexible behaviour. You could stack more patches on top of an
> "atomic replace" patch.
>

Ok - associating the "atomic replace" with the patch itself makes sense
to me. It would also basically work, I think, with the patch I proposed,
except for the case where the "atomic replace" was on top of several
non-"atomic replace" patches. The reason is that the "atomic replace" I
posted looks back 1 patch to see what it needs to replace (assuming all
patches are in atomic replace mode). So instead of just looking back 1
patch, it could 'look back' and make sure it was replacing all
previously loaded patches.
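
A minimal sketch of that 'look back at everything' variant, assuming the
klp_patches list from the posted series; klp_init_patch_no_ops_against() is
a hypothetical helper that would generate the no_ops against one previous
patch, and error handling is elided:

static int klp_init_no_ops_all(struct klp_patch *patch)
{
	struct klp_patch *old;
	int ret;

	/* Walk every patch registered before 'patch', oldest first. */
	list_for_each_entry(old, &klp_patches, list) {
		if (old == patch)
			break;
		/* Hypothetical helper: add no_ops for funcs only in 'old'. */
		ret = klp_init_patch_no_ops_against(patch, old);
		if (ret)
			return ret;
	}

	return 0;
}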


> Anyway, I'm on holiday next week, so I'll take a proper look the week
> after.
>

Ok - have a nice holiday!

Thanks,

-Jason


> Thanks,
> Miroslav
>


* Re: [PATCH 0/3] livepatch: introduce atomic replace
  2017-07-21 17:56   ` Jason Baron
@ 2017-08-10 11:12     ` Miroslav Benes
  2017-08-10 14:56       ` Jason Baron
  0 siblings, 1 reply; 15+ messages in thread
From: Miroslav Benes @ 2017-08-10 11:12 UTC (permalink / raw)
  To: Jason Baron; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, pmladek


> Ok - associating the "atomic replace" with the patch itself makes sense to
> me. It would also basically work, I think, with the patch I proposed,
> except for the case where the "atomic replace" was on top of several
> non-"atomic replace" patches. The reason is that the "atomic replace" I
> posted looks back 1 patch to see what it needs to replace (assuming all
> patches are in atomic replace mode). So instead of just looking back 1
> patch, it could 'look back' and make sure it was replacing all previously
> loaded patches.

Yes, this should not be a problem to implement with the current code.

But I must say I am not entirely happy with the code as it is. The big 
advantage is that there are almost no changes to the consistency model. 
Almost everything is created and allocated during the initialization and 
the patch then looks ordinary. The current code can handle it smoothly. 
"Dynamic" superstructure is what I am not happy with. It is too 
complicated in my opinion. On the other hand I don't see a simple way out 
now.

1. We can try to make your code "nicer".

2. The main problem is that all user structures (klp_func, klp_object, 
klp_patch) are statically allocated in a kernel module. You need to add 
dynamically allocated structures, which is not easy. We can either change 
everything to dynamic allocation, or make everything static. As far as the 
former is concerned, we've been down that road IIRC. It didn't end up 
well. Is the latter even possible?

3. We could bypass everything and register special no-op fops to each 
reverted function. Those could be held and handled separately. It could be 
simpler, but it might be problematic to deal with the consistency model 
interaction.

Thoughts?

Regards,
Miroslav


* Re: [PATCH 0/3] livepatch: introduce atomic replace
  2017-08-10 11:12     ` Miroslav Benes
@ 2017-08-10 14:56       ` Jason Baron
  2017-08-15  9:26         ` Miroslav Benes
  0 siblings, 1 reply; 15+ messages in thread
From: Jason Baron @ 2017-08-10 14:56 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, pmladek



On 08/10/2017 07:12 AM, Miroslav Benes wrote:
>
>> Ok - associating the "atomic replace" with the patch itself makes sense to me.
>> It would also basically work, I think with the patch I proposed except for the
>> case where the the "atomic replace" was on top of several non-"atomic replace"
>> patches. The reason is that the "atomic replace" I posted looks back 1 patch
>> to see what it needs to replace (assuming all patches are in atomic replace
>> mode). So instead of just looking back 1 patch, it could 'look back' and make
>> sure it was replacing all previously loaded patches.
>
> Yes, this should not be a problem to implement with the current code.
>
> But I must say I am not entirely happy with the code as it is. The big
> advantage is that there are almost no changes to the consistency model.
> Almost everything is created and allocated during the initialization and
> the patch then looks ordinary. The current code can handle it smoothly.
> "Dynamic" superstructure is what I am not happy with. It is too
> complicated in my opinion. On the other hand I don't see a simple way out
> now.

Indeed. It is possible to create the 'no-op' functions statically and
relative to a previous livepatch module at build time, but I didn't like
this approach because it means that the created 'no-op' functions are
tied to a specific prior livepatch module, which is potentially also
inconvenient to keep around. So for me, the dynamic creation of the
'no-op' functions is important.

Given that premise, the posted patch adds a linked list head to klp_patch
(to hold the dynamic klp_objects), and another linked list head to
klp_object (to hold the dynamic klp_funcs). It hides this mostly behind
klp_for_each_object() and klp_for_each_func(), such that a lot of the code
does not need to be adjusted. Note as well that I think in general the
lists of 'no-ops' should be fairly small, and that as soon as the live
patch module is enabled the linked lists are removed, and thus the
livepatch module looks the same as before (it only contains the static
allocations).

>
> 1. We can try to make your code "nicer".
>

Sure - I'm happy to incorporate any suggestions, and I could re-review 
to try and simplify further.


> 2. The main problem is that all user structures (klp_func, klp_object,
> klp_patch) are statically allocated in a kernel module. You need to add
> dynamically allocated structures, which is not easy. We can either change
> everything to dynamic allocation, or make everything static. As far as the
> former is concerned, we've been down that road IIRC. It didn't end up
> well. Is the latter even possible?
>

I think making everything static is possible - but I think it ties 
things to the previously loaded module, which I don't think is 
desirable. That said we could also potentially reallocate() the memory 
for the static regions and increase them to the required size (although 
it may be a little tricky to free the original static region b/c other 
data structures may be stored nearby?). I think its also nice that the 
current patch frees these 'no-op' functions when no longer needed, so by 
keeping them separate, this is somewhat simpler.


> 3. We could bypass everything and register special no-op fops to each
> reverted function. Those could be held and handled separately. It could be
> simpler, but it might be problematic to deal with the consistency model
> interaction.
>

Perhaps...I sort of feel like this moves us closer to handling these 
differently, whereas the proposal hides a lot of this behind 
klp_for_each_object() and klp_for_each_func().


Thanks,

-Jason


* Re: [PATCH 0/3] livepatch: introduce atomic replace
  2017-08-10 14:56       ` Jason Baron
@ 2017-08-15  9:26         ` Miroslav Benes
  0 siblings, 0 replies; 15+ messages in thread
From: Miroslav Benes @ 2017-08-15  9:26 UTC (permalink / raw)
  To: Jason Baron; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, pmladek

On Thu, 10 Aug 2017, Jason Baron wrote:

> 
> 
> On 08/10/2017 07:12 AM, Miroslav Benes wrote:
> > 
> > > Ok - associating the "atomic replace" with the patch itself makes sense
> > > to me. It would also basically work, I think, with the patch I proposed,
> > > except for the case where the "atomic replace" was on top of several
> > > non-"atomic replace" patches. The reason is that the "atomic replace" I
> > > posted looks back 1 patch to see what it needs to replace (assuming all
> > > patches are in atomic replace mode). So instead of just looking back 1
> > > patch, it could 'look back' and make sure it was replacing all
> > > previously loaded patches.
> > 
> > Yes, this should not be a problem to implement with the current code.
> > 
> > But I must say I am not entirely happy with the code as it is. The big
> > advantage is that there are almost no changes to the consistency model.
> > Almost everything is created and allocated during the initialization and
> > the patch then looks ordinary. The current code can handle it smoothly.
> > "Dynamic" superstructure is what I am not happy with. It is too
> > complicated in my opinion. On the other hand I don't see a simple way out
> > now.
> 
> Indeed. It is possible to create the 'no-op' functions statically and relative
> to a previous livepatch module at build time, but I didn't like this approach
> because it means that the created 'no-op' functions are tied to a specific
> prior livepatch module, which is potentially also inconvenient to keep
> around. So for me, the dynamic creation of the 'no-op' functions is important.

Yes, I agree. I think the feature should be transparent for the user 
(livepatch author). Otherwise, its benefits are questionable.
 
> Given that premise, the posted patch adds an additional linked list head to
> klp_patch (to add the dynamic klp_objects), and an additional linked list head
> to the klp_object (to add the dynamic klp_funcs). It hides this mostly behind
> klp_for_each_object() and klp_for_each_func(), such that a lot of the
> code does not need to be adjusted. Note as well that I think in general the
> lists of 'no-ops' should be fairly small, and that as soon as the live patch
> module is enabled the linked lists are removed and thus the livepatch module
> looks the same as before (only contains the static allocations).
> 
> > 
> > 1. We can try to make your code "nicer".
> > 
> 
> Sure - I'm happy to incorporate any suggestions, and I could re-review to try
> and simplify further.
>
> 
> > 2. The main problem is that all user structures (klp_func, klp_object,
> > klp_patch) are statically allocated in a kernel module. You need to add
> > dynamically allocated structures, which is not easy. We can either change
> > everything to dynamic allocation, or make everything static. As far as the
> > former is concerned, we've been down that road IIRC. It didn't end up
> > well. Is the latter even possible?
> > 
> 
> I think making everything static is possible - but I think it ties things to
> the previously loaded module, which I don't think is desirable.

Agreed.

> That said we
> could also potentially reallocate() the memory for the static regions and
> increase them to the required size (although it may be a little tricky to free
> the original static region b/c other data structures may be stored nearby?).

Yes, that is what I meant above and I am not sure we can/should do it.

> I
> think it's also nice that the current patch frees these 'no-op' functions when
> no longer needed, so by keeping them separate, this is somewhat simpler.

True.

> > 3. We could bypass everything and register special no-op fops to each
> > reverted function. Those could be held and handled separately. It could be
> > simpler, but it might be problematic to deal with the consistency model
> > interaction.
> > 
> 
> Perhaps...I sort of feel like this moves us closer to handling these
> differently, whereas the proposal hides a lot of this behind
> klp_for_each_object() and klp_for_each_func().

Ok, I'll take a second look and maybe my impression will be different.
I'll try to come up with something.

Miroslav


* Re: [PATCH 1/3] livepatch: Add klp_object and klp_func iterators
  2017-07-19 17:18 ` [PATCH 1/3] livepatch: Add klp_object and klp_func iterators Jason Baron
@ 2017-08-24 14:25   ` Petr Mladek
  2017-08-24 15:39     ` Jason Baron
  0 siblings, 1 reply; 15+ messages in thread
From: Petr Mladek @ 2017-08-24 14:25 UTC (permalink / raw)
  To: Jason Baron; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, mbenes

On Wed 2017-07-19 13:18:05, Jason Baron wrote:
> In preparation for introducing atomic replace, introduce iterators for
> klp_func and klp_object, such that objects and functions can be dynamically
> allocated (needed for atomic replace). Note that this patch is careful not
> to grow the size of klp_func, as that's the most common data structure. This
> patch is intended to effectively be a no-op until atomic replace is
> introduced.

We need to be careful here. The 3-level hierarchy is already complex
enough. This patch adds several asymmetric things:

	+ the dynamically allocated structures are put into linked
	  lists while the static ones are in arrays

	+ the dynamically allocated func lists are linked using
	  an external struct klp_func_no_op while
	  the dynamically allocated obj lists are linked using
	  an additional list_head in struct klp_object

Well, I agree that the combination of the iterators and the extra no-op
funcs allows implementing atomic replace in a relatively easy and
transparent way.

Another possibility would be to add some special flags to the
affected patches on the stack. But this would require hacks
in the ftrace handler, going through all patches in
klp_try_complete_transition() and other functions.
It looks more complicated.

I just wonder if we could clean this patch a bit.
Please, find some thoughts below.


> 
> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> index 194991e..5038337 100644
> --- a/include/linux/livepatch.h
> +++ b/include/linux/livepatch.h
> @@ -24,6 +24,7 @@
>  #include <linux/module.h>
>  #include <linux/ftrace.h>
>  #include <linux/completion.h>
> +#include <linux/list.h>
>  
>  #if IS_ENABLED(CONFIG_LIVEPATCH)
>  
> @@ -88,10 +89,23 @@ struct klp_func {
>  };
>  
>  /**
>> + * struct klp_func_no_op - internal structure used to link no_op functions,
>> + *			    which avoids the need to bloat struct klp_func
>> + * @orig_func:	embedded struct klp_func
>> + * @func_entry:	used to link struct klp_func_no_op to struct klp_object
> + */
> +struct klp_func_no_op {
> +	struct klp_func orig_func;
> +	struct list_head func_entry;
> +};

I am not sure that this is worth it. It saves two pointers in
struct klp_func that are mostly unused. But it adds one more
asymmetry that would complicate maintaining the code.

struct klp_func is already quite big. Note that a struct kobject
is hidden there. I think that we could afford one more
struct list_head there.
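
As a sketch of that variant (the 'func_entry' name is carried over from the
posted series; this is how struct klp_func might look with the list_head
folded in):

#include <linux/kobject.h>
#include <linux/list.h>

struct klp_func {
	/* external */
	const char *old_name;
	void *new_func;
	unsigned long old_sympos;
	bool immediate;

	/* internal */
	unsigned long old_addr;
	struct kobject kobj;
	struct list_head stack_node;
	struct list_head func_entry;	/* links a dynamic no_op func
					 * directly into its klp_object */
	unsigned long old_size, new_size;
	bool patched;
	bool transition;
	bool no_op;
};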

If people are concerned about the size, we could make this
feature configurable at build time. People using this feature
will benefit from the ability to remove the replaced patches.

I also thought about using arrays for the dynamic structures.
But we would need to compute the size first. It might require
more crazy code, especially when we allow replacing all
patches and need to look for duplicates.

> +/**
>   * struct klp_object - kernel object structure for live patching
>   * @name:	module name (or NULL for vmlinux)
>   * @funcs:	function entries for functions to be patched in the object
>   * @kobj:	kobject for sysfs resources
> + * @func_list:	head of list for struct klp_func_no_op
> + * @obj_entry:	used to link struct klp_object to struct klp_patch

I would prefer to make the difference between the static
and dynamic stuff more obvious. The generic names for both
variants are confusing.

I would personally use "no_op" in the names of list_heads related
to the no_op stuff. But then it would make more sense to add this
in the next patch.

Well, I do not have a strong opinion about it.


>   * @mod:	kernel module associated with the patched object
>   *		(NULL for vmlinux)
>   * @patched:	the object's funcs have been added to the klp_ops list
> @@ -103,6 +117,8 @@ struct klp_object {
>  
>  	/* internal */
>  	struct kobject kobj;
> +	struct list_head func_list;
> +	struct list_head obj_entry;
>  	struct module *mod;
>  	bool patched;
>  };
> @@ -114,6 +130,7 @@ struct klp_object {
>   * @immediate:  patch all funcs immediately, bypassing safety mechanisms
>   * @list:	list node for global list of registered patches
>   * @kobj:	kobject for sysfs resources
> + * @obj_list:	head of list for dynamically allocated struct klp_object
>   * @enabled:	the patch is enabled (but operation may be incomplete)
>   * @finish:	for waiting till it is safe to remove the patch module
>   */
> @@ -126,17 +143,96 @@ struct klp_patch {
>  	/* internal */
>  	struct list_head list;
>  	struct kobject kobj;
> +	struct list_head obj_list;
>  	bool enabled;
>  	struct completion finish;
>  };
>  
> -#define klp_for_each_object(patch, obj) \
> +struct obj_iter {
> +	struct klp_object *obj;
> +	struct list_head *obj_list_head;
> +	struct list_head *obj_list_pos;
> +};
> +
> +static inline struct klp_object *obj_iter_next(struct obj_iter *iter)
> +{
> +	struct klp_object *obj;
> +
> +	if (iter->obj->funcs || iter->obj->name) {
> +		obj = iter->obj;
> +		iter->obj++;
> +	} else {
> +		if (iter->obj_list_pos == iter->obj_list_head) {
> +			obj = NULL;
> +		} else {
> +			obj = list_entry(iter->obj_list_pos, struct klp_object,
> +					 obj_entry);
> +			iter->obj_list_pos = iter->obj_list_pos->next;
> +		}
> +	}

This code might deserve some comments. Well, I wonder if we
could make it work without the struct obj_iter.

If we could somehow distinguish the statically and dynamically
allocated struct klp_object then we would know how to find the next
one. I mean something like:

static inline struct klp_object *klp_next_object(struct klp_patch *patch,
						 struct klp_object *obj)
{
	struct klp_object *next_obj = NULL;

	/* Only statically defined objects have a funcs array. */
	if (obj->funcs) {
		next_obj = obj + 1;
		/* The static array is terminated by an empty entry. */
		if (next_obj->funcs || next_obj->name)
			goto out;
		next_obj = NULL;

		/* Is there any dynamically allocated object? */
		if (!list_empty(&patch->obj_list))
			next_obj = list_first_entry(&patch->obj_list,
						    struct klp_object,
						    obj_entry);
		goto out;
	}

	/* This must be a dynamically allocated object. Is it the last one? */
	if (obj->obj_entry.next != &patch->obj_list)
		next_obj = list_next_entry(obj, obj_entry);

out:
	return next_obj;
}

Note that this is not even compile tested.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/3] livepatch: Add klp_object and klp_func iterators
  2017-08-24 14:25   ` Petr Mladek
@ 2017-08-24 15:39     ` Jason Baron
  2017-08-25 11:34       ` Petr Mladek
  0 siblings, 1 reply; 15+ messages in thread
From: Jason Baron @ 2017-08-24 15:39 UTC (permalink / raw)
  To: Petr Mladek; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, mbenes

Hi Petr,

On 08/24/2017 10:25 AM, Petr Mladek wrote:
> On Wed 2017-07-19 13:18:05, Jason Baron wrote:
>> In preparation to introducing atomic replace, introduce iterators for klp_func
>> and klp_object, such that objects and functions can be dynamically allocated
>> (needed for atomic replace). Note that this patch is careful not to grow the
>> size of klp_func as that's the most common data structure. This patch is
>> intended to effectively be a no-op until atomic replace is
>> introduced.
> 
> We need to be careful here. The 3-level hierarchy is already complex
> enough. This patch adds several asymmetric things:
> 
> 	+ the dynamically allocated structures are put into linked
> 	  lists while the static ones are in arrays
> 
> 	+ the dynamically allocated func lists are linked using
> 	  an external struct klp_func_no_op while
> 	  the dynamically allocated obj lists are linked using
> 	  additional list_head in struct klp_object
> 
> Well, I agree that the combination of the iterators and extra no-op funcs
> allows implementing the atomic replace in a relatively easy and transparent
> way.
> 
> Another possibility would be to add some special flags to the
> affected patches on the stack. But this would require hacks
> in the ftrace handler, going through all patches in
> klp_try_complete_transition() and other functions.
> It looks more complicated.
> 
> I just wonder if we could clean this patch up a bit.
> Please, find some thoughts below.
> 
> 
>>
>> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
>> index 194991e..5038337 100644
>> --- a/include/linux/livepatch.h
>> +++ b/include/linux/livepatch.h
>> @@ -24,6 +24,7 @@
>>  #include <linux/module.h>
>>  #include <linux/ftrace.h>
>>  #include <linux/completion.h>
>> +#include <linux/list.h>
>>  
>>  #if IS_ENABLED(CONFIG_LIVEPATCH)
>>  
>> @@ -88,10 +89,23 @@ struct klp_func {
>>  };
>>  
>>  /**
>> + * struct klp_func_no_op - internal object used to link no_op functions, which
>> +			   avoids the need to bloat struct klp_func
>> + * @orig_func:	embeds struct klp_func
>> + * @func_entry:	used link struct klp_func_no_op to struct klp_object
>> + */
>> +struct klp_func_no_op {
>> +	struct klp_func orig_func;
>> +	struct list_head func_entry;
>> +};
> 
> I am not sure that this is worth it. It saves two pointers in
> struct klp_func that are mostly unused. But it adds one more
> asymmetry that would complicate maintaining the code.
> 
> struct klp_func is already quite big. Note that a struct kobject
> is hidden there. I think that we could afford one more
> struct list_head there.
> 
> If people are concerned about the size, we could make this
> feature configurable at build time. People using this feature
> will benefit from the ability to remove the replaced patches.
> 
> I also thought about using arrays for the dynamic structures.
> But we would need to compute the size first. It might require
> more convoluted code, especially when we allow replacing all
> patches and need to look for duplicates.
> 
Ok, I can remove the special 'struct klp_func_no_op'. I agree it adds
complexity to the code, and it could perhaps be revisited as a later
optimization if the size increase is deemed a problem.

>> +/**
>>   * struct klp_object - kernel object structure for live patching
>>   * @name:	module name (or NULL for vmlinux)
>>   * @funcs:	function entries for functions to be patched in the object
>>   * @kobj:	kobject for sysfs resources
>> + * @func_list:	head of list for struct klp_func_no_op
>> + * @obj_entry:	used to link struct klp_object to struct klp_patch
> 
> I would prefer to make the difference between the static
> and dynamic stuff more obvious. The generic names for both
> variants are confusing.
> 
> I would personally use "no_op" in the names of list_heads related
> to the no_op stuff. But then it would make more sense to add this
> in the next patch.
> 
> Well, I do not have a strong opinion about it.
> 

I think I avoided 'no_op' in the naming because I thought it maybe could
be a more generic list, but in practice I'm only using it for the
dynamic noops, so I like explicitly calling it the no_ops list to make
its usage clear. I also think it can still live in 1/2 with the 'no_op'
name, but if that's not ok, I could just rename it in 2/2.

> 
>>   * @mod:	kernel module associated with the patched object
>>   *		(NULL for vmlinux)
>>   * @patched:	the object's funcs have been added to the klp_ops list
>> @@ -103,6 +117,8 @@ struct klp_object {
>>  
>>  	/* internal */
>>  	struct kobject kobj;
>> +	struct list_head func_list;
>> +	struct list_head obj_entry;
>>  	struct module *mod;
>>  	bool patched;
>>  };
>> @@ -114,6 +130,7 @@ struct klp_object {
>>   * @immediate:  patch all funcs immediately, bypassing safety mechanisms
>>   * @list:	list node for global list of registered patches
>>   * @kobj:	kobject for sysfs resources
>> + * @obj_list:	head of list for dynamically allocated struct klp_object
>>   * @enabled:	the patch is enabled (but operation may be incomplete)
>>   * @finish:	for waiting till it is safe to remove the patch module
>>   */
>> @@ -126,17 +143,96 @@ struct klp_patch {
>>  	/* internal */
>>  	struct list_head list;
>>  	struct kobject kobj;
>> +	struct list_head obj_list;
>>  	bool enabled;
>>  	struct completion finish;
>>  };
>>  
>> -#define klp_for_each_object(patch, obj) \
>> +struct obj_iter {
>> +	struct klp_object *obj;
>> +	struct list_head *obj_list_head;
>> +	struct list_head *obj_list_pos;
>> +};
>> +
>> +static inline struct klp_object *obj_iter_next(struct obj_iter *iter)
>> +{
>> +	struct klp_object *obj;
>> +
>> +	if (iter->obj->funcs || iter->obj->name) {
>> +		obj = iter->obj;
>> +		iter->obj++;
>> +	} else {
>> +		if (iter->obj_list_pos == iter->obj_list_head) {
>> +			obj = NULL;
>> +		} else {
>> +			obj = list_entry(iter->obj_list_pos, struct klp_object,
>> +					 obj_entry);
>> +			iter->obj_list_pos = iter->obj_list_pos->next;
>> +		}
>> +	}
> 
> This code might deserve some comments. Well, I wonder if we
> could make it work without the struct obj_iter.
> 
> If we could somehow distinguish the statically and dynamically
> allocated struct klp_object then we would know how to find the next
> one. I mean something like:
> 

yes, removing the obj_iter would be nice, and I think something like
below could work. Another way to distinguish the objects might be to
simply check whether the embedded list_head is used. That test could
then also be used for the function iterator (or, I guess, checking
whether it's a no_op function, but I kind of like using the same
test). I will try this out.
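
Something along these lines (untested; it relies on the static entries
getting INIT_LIST_HEAD() so that an unlinked node points to itself):

static inline bool klp_is_dynamic_object(struct klp_object *obj)
{
	/* A dynamically allocated object is linked into patch->obj_list. */
	return !list_empty(&obj->obj_entry);
}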

Thanks,

-Jason

> static inline struct klp_object *klp_next_object(struct klp_patch *patch,
> 						struct klp_object *obj)
> {
> 	struct klp_object *next_obj = NULL;
> 
> 	/* Only statically defined objects have a funcs array. */
> 	if (obj->funcs) {
> 		next_obj = obj + 1;
> 		/* Is the next static entry the empty terminator? */
> 		if (next_obj->funcs || next_obj->name)
> 			goto out;
> 
> 		next_obj = NULL;
> 		/* Is there any dynamically allocated object? */
> 		if (!list_empty(&patch->obj_list))
> 			next_obj = container_of(patch->obj_list.next,
> 						struct klp_object,
> 						obj_entry);
> 		goto out;
> 	}
> 
> 	/* This must be a dynamically allocated object. Is it the last one? */
> 	if (obj->obj_entry.next != &patch->obj_list)
> 		next_obj = container_of(obj->obj_entry.next,
> 					struct klp_object,
> 					obj_entry);
> 
> out:
> 	return next_obj;
> }
> 
> Note that this is not even compile tested.
> 
> Best Regards,
> Petr
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 2/3] livepatch: add atomic replace
  2017-07-19 17:18 ` [PATCH 2/3] livepatch: add atomic replace Jason Baron
@ 2017-08-25  9:26   ` Petr Mladek
  2017-08-25 11:53     ` Petr Mladek
  2017-08-30 21:36     ` Jason Baron
  0 siblings, 2 replies; 15+ messages in thread
From: Petr Mladek @ 2017-08-25  9:26 UTC (permalink / raw)
  To: Jason Baron; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, mbenes

On Wed 2017-07-19 13:18:06, Jason Baron wrote:
> Introduce atomic replace, by first creating a 'no_op' klp_func for all
> the functions that are reverted by patch B.

> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> index 5038337..6fd7222 100644
> --- a/include/linux/livepatch.h
> +++ b/include/linux/livepatch.h
> @@ -132,6 +134,7 @@ struct klp_object {
>   * @kobj:	kobject for sysfs resources
>   * @obj_list:	head of list for dynamically allocated struct klp_object
>   * @enabled:	the patch is enabled (but operation may be incomplete)
> + * @replaced:	the patch has been replaced and cannot be re-enabled

Is there any particular reason why the removed patch could
no longer be enabled? The user could get around this by
removing the replaced patch (rmmod) and applying it
again (insmod) but...


>   * @finish:	for waiting till it is safe to remove the patch module
>   */
>  struct klp_patch {
> @@ -201,8 +205,8 @@ static inline struct klp_func *func_iter_next(struct func_iter *iter)
>  	struct klp_func *func;
>  	struct klp_func_no_op *func_no_op;
>  
> -	if (iter->func->old_name || iter->func->new_func ||
> -					iter->func->old_sympos) {
> +	if (iter->func && (iter->func->old_name || iter->func->new_func ||
> +			   iter->func->old_sympos)) {

I guess that this check should already be in the previous patch.


>  		func = iter->func;
>  		iter->func++;
>  	} else {
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index e63f478..bf353da 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -602,6 +605,118 @@ static void klp_free_patch(struct klp_patch *patch)
> +
> +static int klp_init_patch_no_ops(struct klp_patch *patch)
> +{
> +	struct klp_object *obj, *prev_obj, *new_obj;
> +	struct klp_func *prev_func, *func;
> +	struct klp_func_no_op *new;
> +	struct klp_patch *prev_patch;
> +	struct obj_iter o_iter, prev_o_iter;
> +	struct func_iter prev_f_iter, f_iter;
> +	bool found, mod;
> +
> +	if (patch->list.prev == &klp_patches)
> +		return 0;
> +
> +	prev_patch = list_prev_entry(patch, list);
> +	klp_for_each_object(prev_patch, prev_obj, &prev_o_iter) {
> +		if (!klp_is_object_loaded(prev_obj))
> +			continue;

We must create the no_op object/function structures also for
not-yet-loaded modules. The transaction might take a long time.
The module might be loaded in the meantime. It must be able
to enable the stack of patches completely, including
the no_op ones, see klp_module_coming().

Note that the module must be patched, even if only temporarily,
because some (non-migrated) processes might still use the semantics
required by the to-be-removed patches.

This is actually one big advantage of the dynamically
allocated no_op functions. The infrastructure for
the coming/going modules should just work. The
same is true for klp_reverse_transition().


> +
> +		klp_for_each_func(prev_obj, prev_func, &prev_f_iter) {
> +			found = false;
> +			klp_for_each_object(patch, obj, &o_iter) {
> +				klp_for_each_func(obj, func, &f_iter) {
> +					if ((strcmp(prev_func->old_name,
> +						    func->old_name) == 0) &&
> +						(prev_func->old_sympos ==
> +							func->old_sympos)) {
> +						found = true;
> +						break;
> +					}
> +				}
> +				if (found)
> +					break;
> +			}
> +			if (found)
> +				continue;
> +
> +			new = kmalloc(sizeof(*new), GFP_KERNEL);
> +			if (!new)
> +				return -ENOMEM;
> +			new->orig_func = *prev_func;
> +			new->orig_func.old_name = prev_func->old_name;
> +			new->orig_func.new_func = NULL;

This is OK if the operation replaces all older patches. Otherwise,
you would need to store here the address from the patch down the stack.


> +			new->orig_func.old_sympos = prev_func->old_sympos;
> +			new->orig_func.immediate = prev_func->immediate;
> +			new->orig_func.old_addr = prev_func->old_addr;

Hmm, this should be

			new->orig_func.old_addr = prev_func->new_func;

klp_check_stack_func() should check the address of the old function
that is currently running. It is the variant of the function that
is on top of stack.

I think that we actually have a bug in the current livepatch code
because old_addr always points to the original function!!!
I am going to look at it.


> +			INIT_LIST_HEAD(&new->orig_func.stack_node);
> +			new->orig_func.old_size = prev_func->old_size;

Same here. It should be the size of the currently used
function implementation (from the last active patch).


> +			new->orig_func.new_size = 0;
> +			new->orig_func.no_op = true;
> +			new->orig_func.patched = false;
> +			new->orig_func.transition = false;
> +			found = false;
> +			mod = klp_is_module(prev_obj);
> +			klp_for_each_object(patch, obj, &o_iter) {
> +				if (mod) {
> +					if (klp_is_module(obj) &&
> +					    strcmp(prev_obj->name,
> +						   obj->name) == 0) {
> +						found = true;
> +						break;
> +					}
> +				} else if (!klp_is_module(obj)) {
> +					found = true;
> +					break;
> +				}
> +			}
> +			if (found) {
> +				list_add(&new->func_entry, &obj->func_list);
> +			} else {
> +				new_obj = kmalloc(sizeof(*new_obj), GFP_KERNEL);
> +				if (!new_obj)
> +					return -ENOMEM;
> +				new_obj->name = prev_obj->name;
> +				new_obj->funcs = NULL;
> +				new_obj->mod = prev_obj->mod;

The new temporary object should be connected with the new patch.
I mean with patch->mod.

Another question is what to do with the kobjects in the dynamically
allocated klp_object and klp_func. I do not have a strong opinion here.

One thing is that the kobject hierarchy allows finding *patch from
*obj and *obj from *func pointers. I guess that we do not rely on
this in the current livepatch code. But it is a nice trick that
we use in Kgraft.

Also userspace might use the related /sys/ hierarchy to monitor
the livepatching situation. If we do not create the kobjects,
we will hide some information.
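
For reference, the current hierarchy looks roughly like this, so any
dynamically created entries would presumably appear in the same places:

  /sys/kernel/livepatch/<patch>/
  /sys/kernel/livepatch/<patch>/<object>/
  /sys/kernel/livepatch/<patch>/<object>/<function,sympos>/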

Finally, please try to split this function into more pieces.
I wonder if we could separate the code for each level
(patch, object, func).


> +				new_obj->patched = false;
> +				INIT_LIST_HEAD(&new_obj->func_list);
> +				INIT_LIST_HEAD(&new_obj->obj_entry);
> +				list_add(&new->func_entry, &new_obj->func_list);
> +				list_add(&new_obj->obj_entry, &patch->obj_list);
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  static int klp_init_func(struct klp_object *obj, struct klp_func *func)
>  {
>  	if (!func->old_name || !func->new_func)

[...]

> diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
> index 1cfdabc..cbb8b9d 100644
> --- a/kernel/livepatch/patch.c
> +++ b/kernel/livepatch/patch.c
> @@ -117,6 +117,8 @@ static void notrace klp_ftrace_handler(unsigned long ip,
>  		}
>  	}
>  
> +	if (func->no_op)
> +		goto unlock;

JFYI, this means that we will use the original (unpatched) code.
It is perfectly fine if the new patch replaces all older ones.
It is wrong in the current concept where you want to replace
only the last patch on the stack. See above my comments about
orig_func.new_func.

To make it clear, I am all for replacing all patches on the stack.
IMHO, it is much more practical. The result is well defined.

If you remove only the last patch, the result depends on
the other patches on the stack which is harder to predict.
IMHO, replacing all patches is a better semantic.


>  	klp_arch_set_pc(regs, (unsigned long)func->new_func);
>  unlock:
>  	preempt_enable_notrace();
> @@ -135,7 +137,7 @@ static unsigned long klp_get_ftrace_location(unsigned long faddr)
>  }
>  #endif
>  
> -static void klp_unpatch_func(struct klp_func *func)
> +void klp_unpatch_func(struct klp_func *func, bool unregistered)
>  {
>  	struct klp_ops *ops;
>  
> @@ -155,9 +157,11 @@ static void klp_unpatch_func(struct klp_func *func)
>  		if (WARN_ON(!ftrace_loc))
>  			return;
>  
> -		WARN_ON(unregister_ftrace_function(&ops->fops));
> -		WARN_ON(ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1, 0));
> -
> +		if (!unregistered) {
> +			WARN_ON(unregister_ftrace_function(&ops->fops));
> +			WARN_ON(ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1,
> +				0));
> +		}
>  		list_del_rcu(&func->stack_node);

IMHO, we should rather unpatch the replaced patches before
we remove the temporary no_op stuff. Then this should not
be needed, see below.


>  		list_del(&ops->node);
>  		kfree(ops);
> diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
> index e112826..43e1609 100644
> --- a/kernel/livepatch/transition.c
> +++ b/kernel/livepatch/transition.c
> @@ -70,6 +72,7 @@ static void klp_synchronize_transition(void)
>  	schedule_on_each_cpu(klp_sync);
>  }
>  
> +
>  /*
>   * The transition to the target patch state is complete.  Clean up the data
>   * structures.
> @@ -81,8 +84,32 @@ static void klp_complete_transition(void)
>  	struct task_struct *g, *task;
>  	unsigned int cpu;
>  	bool immediate_func = false;
> +	bool no_op = false;
>  	struct obj_iter o_iter;
>  	struct func_iter f_iter;
> +	unsigned long ftrace_loc;
> +	struct klp_ops *ops;
> +	struct klp_patch *prev_patch;
> +
> +	/* remove ftrace hook for all no_op functions. */
> +	if (klp_target_state == KLP_PATCHED) {

IMHO, this is the right place to call:

		klp_unpatch_objects(prev_patch);

It will just remove the function from the func_stack(s). All stacks
have at least two functions: the replaced one and one from the
new patch (real or no_op). It should keep the one from the new
patch on top of the stack.


> +		klp_for_each_object(klp_transition_patch, obj, &o_iter) {
> +			klp_for_each_func(obj, func, &f_iter) {
> +				if (!func->no_op)
> +					continue;
> +
> +				ops = klp_find_ops(func->old_addr);
> +				if (WARN_ON(!ops))
> +					continue;
> +				ftrace_loc = func->old_addr;
> +				WARN_ON(unregister_ftrace_function(&ops->fops));
> +				WARN_ON(ftrace_set_filter_ip(&ops->fops,
> +							     ftrace_loc,
> +							     1, 0));
> +				no_op = true;

I wonder if we could reuse klp_unpatch_object() and
klp_unpatch_func() here. We could add a no_op parameter
to unpatch only the no_op stuff.
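
An untested sketch of what I mean, assuming klp_unpatch_func() keeps
its original one-argument form:

static void klp_unpatch_object(struct klp_object *obj, bool no_op_only)
{
	struct klp_func *func;
	struct func_iter f_iter;

	klp_for_each_func(obj, func, &f_iter) {
		if (no_op_only && !func->no_op)
			continue;
		if (func->patched)
			klp_unpatch_func(func);
	}

	/* The object stays patched if only the no_op entries went away. */
	if (!no_op_only)
		obj->patched = false;
}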


> +			}
> +		}
> +	}
>  
>  	if (klp_target_state == KLP_UNPATCHED) {
>  		/*
> @@ -132,6 +161,24 @@ static void klp_complete_transition(void)
>  	}
>  
>  done:
> +	/* remove and free any no_op functions */
> +	if (no_op && klp_target_state == KLP_PATCHED) {
> +		prev_patch = list_prev_entry(klp_transition_patch, list);
> +		if (prev_patch->enabled) {
> +			klp_unpatch_objects(prev_patch);
> +			prev_patch->enabled = false;

Hmm, the patch is normally marked disabled before the transition
starts, see __klp_disable_patch().

We should do this in __klp_enable_patch(). It needs to be handled
also in klp_reverse_transition().


> +			prev_patch->replaced = true;
> +			module_put(prev_patch->mod);
> +		}
> +		klp_for_each_object(klp_transition_patch, obj, &o_iter) {
> +			klp_for_each_func(obj, func, &f_iter) {
> +				if (func->no_op)
> +					klp_unpatch_func(func, true);
> +			}
> +		}

This should be done earlier. See above the hint about adding
a no_op parameter to klp_unpatch_object() and klp_unpatch_func()
and reusing their code.

My intention is that we should ideally do the same actions in
the same locations as we already do for normal patches.
I mean manipulating the various flags, ftrace handlers, and
other per-patch/object/func values. It should help to simplify
review and even reuse some more code.


> +		klp_patch_free_no_ops(klp_transition_patch);
> +	}
> +
>  	klp_target_state = KLP_UNDEFINED;
>  	klp_transition_patch = NULL;
>  }
> @@ -204,10 +251,18 @@ static int klp_check_stack_func(struct klp_func *func,
>  		if (klp_target_state == KLP_UNPATCHED) {
>  			 /*
>  			  * Check for the to-be-unpatched function
> -			  * (the func itself).
> +			  * (the func itself). If we're unpatching
> +			  * a no-op, then we're running the original
> +			  * function. We never 'patch' a no-op function,
> +			  * since we just remove the ftrace hook.
>  			  */
> -			func_addr = (unsigned long)func->new_func;
> -			func_size = func->new_size;
> +			if (func->no_op) {
> +				func_addr = (unsigned long)func->old_addr;
> +				func_size = func->old_size;
> +			} else {
> +				func_addr = (unsigned long)func->new_func;
> +				func_size = func->new_size;

This looks wrong. It should not be needed if we set
new_func and new_addr correctly in klp_init_patch_no_ops().

Thanks a lot for working on this feature. You did a great job.
It seems that it was functional. Most of my hints should just
help to better integrate it into the existing code.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/3] livepatch: Add klp_object and klp_func iterators
  2017-08-24 15:39     ` Jason Baron
@ 2017-08-25 11:34       ` Petr Mladek
  0 siblings, 0 replies; 15+ messages in thread
From: Petr Mladek @ 2017-08-25 11:34 UTC (permalink / raw)
  To: Jason Baron; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, mbenes

On Thu 2017-08-24 11:39:08, Jason Baron wrote:
> On 08/24/2017 10:25 AM, Petr Mladek wrote:
> > On Wed 2017-07-19 13:18:05, Jason Baron wrote:
> >> +/**
> >>   * struct klp_object - kernel object structure for live patching
> >>   * @name:	module name (or NULL for vmlinux)
> >>   * @funcs:	function entries for functions to be patched in the object
> >>   * @kobj:	kobject for sysfs resources
> >> + * @func_list:	head of list for struct klp_func_no_op
> >> + * @obj_entry:	used to link struct klp_object to struct klp_patch
> > 
> > I would prefer to make the difference between the static
> > and dynamic stuff more obvious. The generic names for both
> > variants are confusing.
> > 
> > I would personally use "no_op" in the names of list_heads related
> > to the no_op stuff. But then it would make more sense to add this
> > in the next patch.
> > 
> > Well, I do not have strong opinion about it.
> > 
> 
> I think I avoided 'no_op' in the naming because I thought it maybe could
> be a more generic list, but in practice I'm only using it for the
> dynamic noops, so I like explicitly calling it the no_ops list to make
> its usage clear. I also think it can still live in 1/2 with the 'no_op'
> name, but if that's not ok, I could just rename it in 2/2.

You could do only the needed preparation steps for the more
complicated iterator in this patch. It might still handle
only the static arrays. Then it might be extended in the 2nd patch
together with adding the extra lists.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 2/3] livepatch: add atomic replace
  2017-08-25  9:26   ` Petr Mladek
@ 2017-08-25 11:53     ` Petr Mladek
  2017-08-30 21:36     ` Jason Baron
  1 sibling, 0 replies; 15+ messages in thread
From: Petr Mladek @ 2017-08-25 11:53 UTC (permalink / raw)
  To: Jason Baron; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, mbenes

On Fri 2017-08-25 11:26:23, Petr Mladek wrote:
> On Wed 2017-07-19 13:18:06, Jason Baron wrote:
> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index e63f478..bf353da 100644
> > --- a/kernel/livepatch/core.c
> > +++ b/kernel/livepatch/core.c
> > @@ -602,6 +605,118 @@ static void klp_free_patch(struct klp_patch *patch)
> > +
> > +static int klp_init_patch_no_ops(struct klp_patch *patch)
> > +{
> > +	struct klp_object *obj, *prev_obj, *new_obj;
> > +	struct klp_func *prev_func, *func;
> > +	struct klp_func_no_op *new;
> > +	struct klp_patch *prev_patch;
> > +	struct obj_iter o_iter, prev_o_iter;
> > +	struct func_iter prev_f_iter, f_iter;
> > +	bool found, mod;
> > +
> > +	if (patch->list.prev == &klp_patches)
> > +		return 0;
> > +
> > +	prev_patch = list_prev_entry(patch, list);
> > +	klp_for_each_object(prev_patch, prev_obj, &prev_o_iter) {
> > +		if (!klp_is_object_loaded(prev_obj))
> > +			continue;
> > +
> > +		klp_for_each_func(prev_obj, prev_func, &prev_f_iter) {
> > +			found = false;
> > +			klp_for_each_object(patch, obj, &o_iter) {
> > +				klp_for_each_func(obj, func, &f_iter) {
> > +					if ((strcmp(prev_func->old_name,
> > +						    func->old_name) == 0) &&
> > +						(prev_func->old_sympos ==
> > +							func->old_sympos)) {
> > +						found = true;
> > +						break;
> > +					}
> > +				}
> > +				if (found)
> > +					break;
> > +			}
> > +			if (found)
> > +				continue;
> > +
> > +			new = kmalloc(sizeof(*new), GFP_KERNEL);
> > +			if (!new)
> > +				return -ENOMEM;
> > +			new->orig_func = *prev_func;
> > +			new->orig_func.old_name = prev_func->old_name;
> > +			new->orig_func.new_func = NULL;
> 
> This is OK if the operation replaces all older patches. Otherwise,
> you would need to store here the address from the patch down the stack.
> 
> 
> > +			new->orig_func.old_sympos = prev_func->old_sympos;
> > +			new->orig_func.immediate = prev_func->immediate;
> > +			new->orig_func.old_addr = prev_func->old_addr;
> 
> Hmm, this should be
> 
> 			new->orig_func.old_addr = prev_func->new_func;
> 
> klp_check_stack_func() should check the address of the old function
> that is currently running. It is the variant of the function that
> is on top of stack.
>
> I think that we actually have a bug in the current livepatch code
> because old_addr always points to the original function!!!
> I am going to look at it.

I take this back. old_addr and old_size point to the original
(unpatched) code. The values are the same in all patches
for the same function. klp_check_stack_func() uses this information
only when there is only one patch on the func_stack. Otherwise,
it checks new_func and new_size from the previous patch on the stack.

In other words, you assign old_addr and old_size correctly.
Also the livepatch code handles this correctly. Ufff :-)
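
For reference, this is roughly the relevant piece of
klp_check_stack_func() in current mainline (abridged):

	if (klp_target_state == KLP_UNPATCHED) {
		/* Check for the to-be-unpatched function (the func itself). */
		func_addr = (unsigned long)func->new_func;
		func_size = func->new_size;
	} else {
		/* Check for the to-be-patched function (the previous func). */
		ops = klp_find_ops(func->old_addr);

		if (list_is_singular(&ops->func_stack)) {
			/* original function */
			func_addr = func->old_addr;
			func_size = func->old_size;
		} else {
			/* previously patched function */
			struct klp_func *prev;

			prev = list_next_entry(func, stack_node);
			func_addr = (unsigned long)prev->new_func;
			func_size = prev->new_size;
		}
	}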

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 2/3] livepatch: add atomic replace
  2017-08-25  9:26   ` Petr Mladek
  2017-08-25 11:53     ` Petr Mladek
@ 2017-08-30 21:36     ` Jason Baron
  1 sibling, 0 replies; 15+ messages in thread
From: Jason Baron @ 2017-08-30 21:36 UTC (permalink / raw)
  To: Petr Mladek; +Cc: linux-kernel, live-patching, jpoimboe, jeyu, jikos, mbenes

On 08/25/2017 05:26 AM, Petr Mladek wrote:
> On Wed 2017-07-19 13:18:06, Jason Baron wrote:
>> Introduce atomic replace, by first creating a 'no_op' klp_func for all
>> the functions that are reverted by patch B.
> 
>> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
>> index 5038337..6fd7222 100644
>> --- a/include/linux/livepatch.h
>> +++ b/include/linux/livepatch.h
>> @@ -132,6 +134,7 @@ struct klp_object {
>>   * @kobj:	kobject for sysfs resources
>>   * @obj_list:	head of list for dynamically allocated struct klp_object
>>   * @enabled:	the patch is enabled (but operation may be incomplete)
>> + * @replaced:	the patch has been replaced and cannot be re-enabled
> 
> Is there any particular reason why the removed patch could
> no longer be enabled? The user could get around this by
> removing the replaced patch (rmmod) and applying it
> again (insmod) but...
>

In the current model, patches are enabled and disabled in a 'stacked'
manner. So if the topmost patch is enabled, then all previous patches
are also enabled. This atomic replace feature doesn't fit that model,
as it disables all previous patches, since it's replacing all patches
that came before it.

We could allow the previous patches to be re-enabled, but if they are
'replace' patches they would have to re-calculate the no-op patches
based on the last enabled patch. This breaks the 'stacked' model and
seemed to me to add extra complexity without really adding any new
functionality. As you pointed out, and I have verified in testing, the
model with replace is to effectively load the new module, and if you
want to go back to an older one you rmmod() it and insmod() it again.
You thus effectively only have one patch loaded at a time (except
during transitions). So I feel like disabling all previous patches
makes this operation model explicit.
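
So the expected flow is roughly (assuming the patch modules enable
themselves on load, as the sample modules do):

  insmod livepatch-A.ko    # apply patch A
  insmod livepatch-B.ko    # B atomically replaces A
  rmmod livepatch-A.ko     # A has been replaced, safe to remove
  insmod livepatch-A.ko    # the only way to go back to A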

> 
>>   * @finish:	for waiting till it is safe to remove the patch module
>>   */
>>  struct klp_patch {
>> @@ -201,8 +205,8 @@ static inline struct klp_func *func_iter_next(struct func_iter *iter)
>>  	struct klp_func *func;
>>  	struct klp_func_no_op *func_no_op;
>>  
>> -	if (iter->func->old_name || iter->func->new_func ||
>> -					iter->func->old_sympos) {
>> +	if (iter->func && (iter->func->old_name || iter->func->new_func ||
>> +			   iter->func->old_sympos)) {
> 
> I guess that this check should already be in the previous patch.
> 

I've removed this check now based on your iterator suggestions. I've
updated the series and will send out v2 shortly.

> 
>>  		func = iter->func;
>>  		iter->func++;
>>  	} else {
>> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
>> index e63f478..bf353da 100644
>> --- a/kernel/livepatch/core.c
>> +++ b/kernel/livepatch/core.c
>> @@ -602,6 +605,118 @@ static void klp_free_patch(struct klp_patch *patch)
>> +
>> +static int klp_init_patch_no_ops(struct klp_patch *patch)
>> +{
>> +	struct klp_object *obj, *prev_obj, *new_obj;
>> +	struct klp_func *prev_func, *func;
>> +	struct klp_func_no_op *new;
>> +	struct klp_patch *prev_patch;
>> +	struct obj_iter o_iter, prev_o_iter;
>> +	struct func_iter prev_f_iter, f_iter;
>> +	bool found, mod;
>> +
>> +	if (patch->list.prev == &klp_patches)
>> +		return 0;
>> +
>> +	prev_patch = list_prev_entry(patch, list);
>> +	klp_for_each_object(prev_patch, prev_obj, &prev_o_iter) {
>> +		if (!klp_is_object_loaded(prev_obj))
>> +			continue;
> 
> We must create the no_op object/function structures also for
> not-yet-loaded modules. The transaction might take a long time.
> The module might be loaded in the meantime. It must be able
> to enable the stack of patches completely, including
> the no_op ones, see klp_module_coming().
> 
> Note that the module must be patched, even if only temporarily,
> because some (non-migrated) processes might still use the semantics
> required by the to-be-removed patches.
> 
> This is actually one big advantage of the dynamically
> allocated no_op functions. The infrastructure for
> the coming/going modules should just work. The
> same is true for klp_reverse_transition().

Agreed.

> 
> 
>> +
>> +		klp_for_each_func(prev_obj, prev_func, &prev_f_iter) {
>> +			found = false;
>> +			klp_for_each_object(patch, obj, &o_iter) {
>> +				klp_for_each_func(obj, func, &f_iter) {
>> +					if ((strcmp(prev_func->old_name,
>> +						    func->old_name) == 0) &&
>> +						(prev_func->old_sympos ==
>> +							func->old_sympos)) {
>> +						found = true;
>> +						break;
>> +					}
>> +				}
>> +				if (found)
>> +					break;
>> +			}
>> +			if (found)
>> +				continue;
>> +
>> +			new = kmalloc(sizeof(*new), GFP_KERNEL);
>> +			if (!new)
>> +				return -ENOMEM;
>> +			new->orig_func = *prev_func;
>> +			new->orig_func.old_name = prev_func->old_name;
>> +			new->orig_func.new_func = NULL;
> 
> This is OK if the operation replaces all older patches. Otherwise,
> you would need to store here the address from the patch down the stack.
> 

Yes, the intention is to replace all older patches.

> 
>> +			new->orig_func.old_sympos = prev_func->old_sympos;
>> +			new->orig_func.immediate = prev_func->immediate;
>> +			new->orig_func.old_addr = prev_func->old_addr;
> 
> Hmm, this should be
> 
> 			new->orig_func.old_addr = prev_func->new_func;
> 
> klp_check_stack_func() should check the address of the old function
> that is currently running. It is the variant of the function that
> is on top of stack.
> 
> I think that we actually have a bug in the current livepatch code
> because old_addr always points to the original function!!!
> I am going to look at it.
> 
> 
>> +			INIT_LIST_HEAD(&new->orig_func.stack_node);
>> +			new->orig_func.old_size = prev_func->old_size;
> 
> Same here. It should be the size of the currently used
> function implementation (from the last active patch).
> 

I think your follow-up mail here pointed out this is fine as is.

> 
>> +			new->orig_func.new_size = 0;
>> +			new->orig_func.no_op = true;
>> +			new->orig_func.patched = false;
>> +			new->orig_func.transition = false;
>> +			found = false;
>> +			mod = klp_is_module(prev_obj);
>> +			klp_for_each_object(patch, obj, &o_iter) {
>> +				if (mod) {
>> +					if (klp_is_module(obj) &&
>> +					    strcmp(prev_obj->name,
>> +						   obj->name) == 0) {
>> +						found = true;
>> +						break;
>> +					}
>> +				} else if (!klp_is_module(obj)) {
>> +					found = true;
>> +					break;
>> +				}
>> +			}
>> +			if (found) {
>> +				list_add(&new->func_entry, &obj->func_list);
>> +			} else {
>> +				new_obj = kmalloc(sizeof(*new_obj), GFP_KERNEL);
>> +				if (!new_obj)
>> +					return -ENOMEM;
>> +				new_obj->name = prev_obj->name;
>> +				new_obj->funcs = NULL;
>> +				new_obj->mod = prev_obj->mod;
> 
> The new temporary object should be connected with the new patch.
> I mean with patch->mod.
> 
> Another question is what to do with the kobjects in the dynamically
> allocated klp_object and klp_func. I do not have a strong opinion here.
> 
> One thing is that the kobject hierarchy allows finding *patch from
> *obj and *obj from *func pointers. I guess that we do not rely on
> this in the current livepatch code. But it is a nice trick that
> we use in Kgraft.
> 
> Also userspace might use the related /sys/ hierarchy to monitor
> the livepatching situation. If we do not create the kobjects,
> we will hide some information.
> 

I've added the kobj initialization stuff in v2.

> Finally, please try to split this function into more pieces.
> I wonder if we could separate the code for each level
> (patch, object, func).

I'm not making use of the klp_init_object() and klp_init_func() there...


> 
> 
>> +				new_obj->patched = false;
>> +				INIT_LIST_HEAD(&new_obj->func_list);
>> +				INIT_LIST_HEAD(&new_obj->obj_entry);
>> +				list_add(&new->func_entry, &new_obj->func_list);
>> +				list_add(&new_obj->obj_entry, &patch->obj_list);
>> +			}
>> +		}
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>  static int klp_init_func(struct klp_object *obj, struct klp_func *func)
>>  {
>>  	if (!func->old_name || !func->new_func)
> 
> [...]
> 
>> diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
>> index 1cfdabc..cbb8b9d 100644
>> --- a/kernel/livepatch/patch.c
>> +++ b/kernel/livepatch/patch.c
>> @@ -117,6 +117,8 @@ static void notrace klp_ftrace_handler(unsigned long ip,
>>  		}
>>  	}
>>  
>> +	if (func->no_op)
>> +		goto unlock;
> 
> JFYI, this means that we will use the original (unpatched) code.
> It is perfectly fine if the new patch replaces all older ones.
> It is wrong in the current concept where you want to replace
> only the last patch on the stack. See above my comments about
> orig_func.new_func.
> 
> To make it clear, I am all for replacing all patches on the stack.
> IMHO, it is much more practical. The result is well defined.
> 
> If you remove only the last patch, the result depends on
> the other patches on the stack which is harder to predict.
> IMHO, replacing all patches is a better semantic.

Agreed. I've updated the semantics such that a 'replace' patch replaces
all previous patches on the stack. It walks the patch list backwards
until it finds the previous 'replace' patch (since that replaced
everything before itself).
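
The backward walk itself is short; a sketch, where 'replace' is the
new per-patch flag in v2:

	struct klp_patch *prev_patch = patch;

	list_for_each_entry_continue_reverse(prev_patch, &klp_patches,
					     list) {
		/* create no_op entries for prev_patch's functions ... */
		if (prev_patch->replace)
			break;
	}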

> 
> 
>>  	klp_arch_set_pc(regs, (unsigned long)func->new_func);
>>  unlock:
>>  	preempt_enable_notrace();
>> @@ -135,7 +137,7 @@ static unsigned long klp_get_ftrace_location(unsigned long faddr)
>>  }
>>  #endif
>>  
>> -static void klp_unpatch_func(struct klp_func *func)
>> +void klp_unpatch_func(struct klp_func *func, bool unregistered)
>>  {
>>  	struct klp_ops *ops;
>>  
>> @@ -155,9 +157,11 @@ static void klp_unpatch_func(struct klp_func *func)
>>  		if (WARN_ON(!ftrace_loc))
>>  			return;
>>  
>> -		WARN_ON(unregister_ftrace_function(&ops->fops));
>> -		WARN_ON(ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1, 0));
>> -
>> +		if (!unregistered) {
>> +			WARN_ON(unregister_ftrace_function(&ops->fops));
>> +			WARN_ON(ftrace_set_filter_ip(&ops->fops, ftrace_loc, 1,
>> +				0));
>> +		}
>>  		list_del_rcu(&func->stack_node);
> 
> IMHO, we should rather unpatch the replaced patches before
> we remove the temporary no_op stuff. Then this should not
> be needed, see below.
> 

Ok, makes sense. I've added this to the v2 patches.


> 
>>  		list_del(&ops->node);
>>  		kfree(ops);
>> diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
>> index e112826..43e1609 100644
>> --- a/kernel/livepatch/transition.c
>> +++ b/kernel/livepatch/transition.c
>> @@ -70,6 +72,7 @@ static void klp_synchronize_transition(void)
>>  	schedule_on_each_cpu(klp_sync);
>>  }
>>  
>> +
>>  /*
>>   * The transition to the target patch state is complete.  Clean up the data
>>   * structures.
>> @@ -81,8 +84,32 @@ static void klp_complete_transition(void)
>>  	struct task_struct *g, *task;
>>  	unsigned int cpu;
>>  	bool immediate_func = false;
>> +	bool no_op = false;
>>  	struct obj_iter o_iter;
>>  	struct func_iter f_iter;
>> +	unsigned long ftrace_loc;
>> +	struct klp_ops *ops;
>> +	struct klp_patch *prev_patch;
>> +
>> +	/* remove ftrace hook for all no_op functions. */
>> +	if (klp_target_state == KLP_PATCHED) {
> 
> IMHO, this is the right place to call:
> 
> 		klp_unpatch_objects(prev_patch);
> 
> It will just remove the function from the func_stack(s). All stacks
> have at least two functions: the replaced one and one from the
> new patch (real or no_op). It should keep the one from the new
> patch on top of the stack.
> 
> 
>> +		klp_for_each_object(klp_transition_patch, obj, &o_iter) {
>> +			klp_for_each_func(obj, func, &f_iter) {
>> +				if (!func->no_op)
>> +					continue;
>> +
>> +				ops = klp_find_ops(func->old_addr);
>> +				if (WARN_ON(!ops))
>> +					continue;
>> +				ftrace_loc = func->old_addr;
>> +				WARN_ON(unregister_ftrace_function(&ops->fops));
>> +				WARN_ON(ftrace_set_filter_ip(&ops->fops,
>> +							     ftrace_loc,
>> +							     1, 0));
>> +				no_op = true;
> 
> I wonder if we could reuse klp_unpatch_object() and
> klp_unpatch_func() here. We could add a no_op parameter
> to unpatch only the no_op stuff.
> 

Ok, added in v2.

> 
>> +			}
>> +		}
>> +	}
>>  
>>  	if (klp_target_state == KLP_UNPATCHED) {
>>  		/*
>> @@ -132,6 +161,24 @@ static void klp_complete_transition(void)
>>  	}
>>  
>>  done:
>> +	/* remove and free any no_op functions */
>> +	if (no_op && klp_target_state == KLP_PATCHED) {
>> +		prev_patch = list_prev_entry(klp_transition_patch, list);
>> +		if (prev_patch->enabled) {
>> +			klp_unpatch_objects(prev_patch);
>> +			prev_patch->enabled = false;
> 
> Hmm, the patch is normally marked disabled before the transition
> starts, see __klp_disable_patch().
> 

I was treating this as a special case for atomic replace, where it
disables all previous patches, so it's not in the usual
__klp_disable_patch() spot.

> We should do this in __klp_enable_patch(). It needs to be handled
> also in klp_reverse_transition().
>

Can you clarify this a bit more for me?

> 
>> +			prev_patch->replaced = true;
>> +			module_put(prev_patch->mod);
>> +		}
>> +		klp_for_each_object(klp_transition_patch, obj, &o_iter) {
>> +			klp_for_each_func(obj, func, &f_iter) {
>> +				if (func->no_op)
>> +					klp_unpatch_func(func, true);
>> +			}
>> +		}
> 
> This should be done earlier. See above the hint about adding
> a no_op parameter to klp_unpatch_object() and klp_unpatch_func()
> and reusing their code.
> 
> My intention is that we should ideally do the same actions in
> the same locations as we already do for normal patches.
> I mean manipulating the various flags, ftrace handlers, and
> other per-patch/object/func values. It should help to simplify
> review and even reuse some more code.
>

Ok, I've tried to address this in v2.


> 
>> +		klp_patch_free_no_ops(klp_transition_patch);
>> +	}
>> +
>>  	klp_target_state = KLP_UNDEFINED;
>>  	klp_transition_patch = NULL;
>>  }
>> @@ -204,10 +251,18 @@ static int klp_check_stack_func(struct klp_func *func,
>>  		if (klp_target_state == KLP_UNPATCHED) {
>>  			 /*
>>  			  * Check for the to-be-unpatched function
>> -			  * (the func itself).
>> +			  * (the func itself). If we're unpatching
>> +			  * a no-op, then we're running the original
>> +			  * function. We never 'patch' a no-op function,
>> +			  * since we just remove the ftrace hook.
>>  			  */
>> -			func_addr = (unsigned long)func->new_func;
>> -			func_size = func->new_size;
>> +			if (func->no_op) {
>> +				func_addr = (unsigned long)func->old_addr;
>> +				func_size = func->old_size;
>> +			} else {
>> +				func_addr = (unsigned long)func->new_func;
>> +				func_size = func->new_size;
> 
> This looks wrong. It should not be needed if we set
> new_func and new_addr correctly in klp_init_patch_no_ops().
> 

When we're patching to the no_op, yes, the previous code is what we
want. However, if we are in the transition phase, then the no_op means
that we have the original function on the stack, not the new one?

Thanks for the review. As mentioned I will send out a v2 shortly.

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2017-08-30 21:37 UTC | newest]

Thread overview: 15+ messages
2017-07-19 17:18 [PATCH 0/3] livepatch: introduce atomic replace Jason Baron
2017-07-19 17:18 ` [PATCH 1/3] livepatch: Add klp_object and klp_func iterators Jason Baron
2017-08-24 14:25   ` Petr Mladek
2017-08-24 15:39     ` Jason Baron
2017-08-25 11:34       ` Petr Mladek
2017-07-19 17:18 ` [PATCH 2/3] livepatch: add atomic replace Jason Baron
2017-08-25  9:26   ` Petr Mladek
2017-08-25 11:53     ` Petr Mladek
2017-08-30 21:36     ` Jason Baron
2017-07-19 17:18 ` [PATCH 3/3] livepatch: Add a sysctl livepatch_mode for " Jason Baron
2017-07-21 13:06 ` [PATCH 0/3] livepatch: introduce " Miroslav Benes
2017-07-21 17:56   ` Jason Baron
2017-08-10 11:12     ` Miroslav Benes
2017-08-10 14:56       ` Jason Baron
2017-08-15  9:26         ` Miroslav Benes
