From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752155AbdFOI5z (ORCPT );
	Thu, 15 Jun 2017 04:57:55 -0400
Received: from mx2.suse.de ([195.135.220.15]:47398 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1752051AbdFOI5y (ORCPT );
	Thu, 15 Jun 2017 04:57:54 -0400
Date: Thu, 15 Jun 2017 10:57:46 +0200 (CEST)
From: Miroslav Benes
To: Petr Mladek
cc: Josh Poimboeuf , Jessica Yu , Jiri Kosina , Steven Rostedt ,
    "Paul E. McKenney" , Ingo Molnar , Peter Zijlstra ,
    Thomas Gleixner , live-patching@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] livepatch/rcu: Fix stacking of patches when RCU
 infrastructure is patched
In-Reply-To: <1497430492-3507-1-git-send-email-pmladek@suse.com>
Message-ID:
References: <1497430492-3507-1-git-send-email-pmladek@suse.com>
User-Agent: Alpine 2.20 (LSU 67 2015-01-07)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 14 Jun 2017, Petr Mladek wrote:

> rcu_read_(un)lock(), list_*_rcu(), and synchronize_rcu() are used for
> a secure access and manipulation of the list of patches that modify
> the same function. In particular, it is the variable func_stack that
> is accessible from the ftrace handler via struct ftrace_ops and klp_ops.
>
> Of course, it synchronizes also some states of the patch on the top
> of the stack, e.g. func->transition in klp_ftrace_handler.
>
> At the same time, this mechanism guards also the manipulation
> of task->patch_state. It is modified according to the state of
> the transition and the state of the process.
>
> Now, all this works well as long as RCU works well. Sadly livepatching
> might get into some corner cases when this is not true. For example,
> RCU is not watching when rcu_read_lock() is taken in idle threads.
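(Just to make the read side being discussed concrete: before this patch, klp_ftrace_handler() walks func_stack under an RCU read lock, roughly as in the simplified sketch below. This is a sketch only; the real handler also deals with the transition state and error paths, which are omitted here.)

```
	/*
	 * Simplified sketch of the read side in klp_ftrace_handler().
	 * The real handler also consults func->transition and
	 * task->patch_state; those details are left out.
	 */
	ops = container_of(fops, struct klp_ops, fops);

	rcu_read_lock();
	func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
				      stack_node);
	if (func)
		klp_arch_set_pc(regs, (unsigned long)func->new_func);
	rcu_read_unlock();
```

If RCU is not watching when this rcu_read_lock() is taken, the list walk above has no real protection, which is exactly the problem described.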
> It is because they might sleep and prevent reaching the grace period
> for too long.
>
> There are ways how to make RCU watching even in idle threads,
> see rcu_irq_enter(). But there is a small location inside RCU
> infrastructure when even this does not work.
>
> This small problematic location can be detected either before
> calling rcu_irq_enter() by rcu_irq_enter_disabled() or later by
> rcu_is_watching(). Sadly, there is no safe way how to handle it.
> Once we detect that RCU was not watching, we might see inconsistent
> state of the function stack and the related variables in
> klp_ftrace_handler(). Then we could do a wrong decision,
> use an incompatible implementation of the function and
> break the consistency of the system. We could warn but
> we could not avoid the damage.
>
> Fortunately, ftrace has similar problems and they seem to
> be solved well there. It uses a heavy weight implementation
> of some RCU operations. In particular, it replaces:
>
>   + rcu_read_lock() with preempt_disable_notrace()
>   + rcu_read_unlock() with preempt_enable_notrace()
>   + synchronize_rcu() with schedule_on_each_cpu(sync_work)
>
> My understanding is that this is RCU implementation from
> a stone age. It meets the core RCU requirements but it is
> rather ineffective. Especially, it does not allow to batch
> or speed up the synchronize calls.
>
> On the other hand, it is very trivial. It allows to safely
> trace and/or livepatch even the RCU core infrastructure.
> And the effectiveness is not a big issue because using ftrace
> or livepatches on productive systems is a rare operation.
> The safety is much more important than a negligible extra
> load.
>
> Note that the alternative implementation follows the RCU
> principles. Therefore, we could and actually must use
> list_*_rcu() variants when manipulating the func_stack.
> These functions allow to access the pointers in
> the right order and with the right barriers.
> But they
> do not use any other information that would be set
> only by rcu_read_lock().
>
> Also note that there are actually two problems solved in ftrace:
>
> First, it cares about the consistency of RCU read sections.
> It is being solved the way as described and used in this patch.
>
> Second, ftrace needs to make sure that nobody is inside
> the dynamic trampoline when it is being freed. For this, it also
> calls synchronize_rcu_tasks() in preemptive kernel in
> ftrace_shutdown().
>
> Livepatch has similar problem but it is solved by ftrace for free.
> klp_ftrace_handler() is a good guy and newer sleeps. In addition,

s/newer/never/

> it is registered with FTRACE_OPS_FL_DYNAMIC. It causes that
> unregister_ftrace_function() calls:
>
>   * schedule_on_each_cpu(ftrace_sync) - always
>   * synchronize_rcu_tasks() - in preemptive kernel
>
> The effect is that nobody is neither inside the dynamic trampoline
> nor inside the ftrace handler after unregister_ftrace_function()
> returns.
>
> Signed-off-by: Petr Mladek

Acked-by: Miroslav Benes

> +/*
> + * We allow to patch also functions where RCU is not watching,
> + * e.g. before user_exit(). We can not rely on the RCU infrastructure
> + * to do the synchronization. Instead hard force the sched synchronization.
> + *
> + * This approach allows to use RCU functions for manipulating func_stack
> + * a safe way .

s/a safe way /safely/.

Miroslav
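P.S. For anyone following along, the sched-based synchronization the quoted comment sits above boils down to roughly the following (a sketch of the patch's approach; the exact shape in the final commit may differ). Scheduling an empty work item on every CPU guarantees each CPU has gone through a context switch, i.e. has left any preempt-disabled "read section" in klp_ftrace_handler():

```
static void klp_sync(struct work_struct *work)
{
}

/*
 * Heavy-weight replacement for synchronize_rcu(); works even when
 * RCU is not watching, because it only relies on the scheduler.
 */
static void klp_synchronize_transition(void)
{
	schedule_on_each_cpu(klp_sync);
}
```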