From: Tom Zanussi
To: rostedt@goodmis.org
Cc: tglx@linutronix.de, mhiramat@kernel.org, namhyung@kernel.org,
    vedang.patel@intel.com, bigeasy@linutronix.de, joel.opensrc@gmail.com,
    joelaf@google.com, mathieu.desnoyers@efficios.com, baohong.liu@intel.com,
    rajvi.jingar@intel.com, julia@ni.com, linux-kernel@vger.kernel.org,
    linux-rt-users@vger.kernel.org, Tom Zanussi
Subject: [PATCH v4 35/37] tracing: Increase trace_recursive_lock() limit for synthetic events
Date: Mon, 30 Oct 2017 15:52:17 -0500
X-Mailer: git-send-email 1.9.3

Synthetic event generation needs to happen while the current event is
still in progress, so add 1 to the trace_recursive_lock() recursion
limit to account for that. Because we also want to allow a synthetic
event to be generated from another synthetic event, add a second
increment for that case as well.

Signed-off-by: Tom Zanussi
---
 kernel/trace/ring_buffer.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index ab7b65d..39f1ca0 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2589,16 +2589,16 @@ static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer,
  * IRQ context
  * NMI context
  *
- * If for some reason the ring buffer starts to recurse, we
- * only allow that to happen at most 4 times (one for each
- * context). If it happens 5 times, then we consider this a
- * recusive loop and do not let it go further.
+ * If for some reason the ring buffer starts to recurse, we only allow
+ * that to happen at most 6 times (one for each context, plus possibly
+ * two levels of synthetic event generation). If it happens 7 times,
+ * then we consider this a recursive loop and do not let it go further.
  */
 static __always_inline int
 trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	if (cpu_buffer->current_context >= 4)
+	if (cpu_buffer->current_context >= 6)
 		return 1;
 
 	cpu_buffer->current_context++;
-- 
1.9.3
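
A note on the counting scheme: the limit works because each nested
writer on a CPU bumps cpu_buffer->current_context on entry and drops it
again when its event commits, so the counter tracks the current nesting
depth. The following is a minimal user-space sketch of that behavior,
not the kernel code itself: cpu_buffer_ctx, demo_recursive_lock() and
demo_recursive_unlock() are hypothetical stand-ins for the per-CPU
buffer state and for trace_recursive_lock()/trace_recursive_unlock().

#include <stdio.h>

/* 4 contexts (normal, softirq, IRQ, NMI) + 2 synthetic-event levels */
#define RECURSION_LIMIT	6

struct cpu_buffer_ctx {
	int current_context;	/* current nesting depth on this CPU */
};

/*
 * Refuse entry once the nesting ceiling is reached; otherwise record
 * one more level.  Returns 1 for "rejected", 0 for "acquired",
 * mirroring the convention in the patch above.
 */
static int demo_recursive_lock(struct cpu_buffer_ctx *ctx)
{
	if (ctx->current_context >= RECURSION_LIMIT)
		return 1;

	ctx->current_context++;
	return 0;
}

static void demo_recursive_unlock(struct cpu_buffer_ctx *ctx)
{
	ctx->current_context--;
}

int main(void)
{
	struct cpu_buffer_ctx ctx = { 0 };
	int attempt, acquired = 0;

	/*
	 * Nest one level per attempt: the first six succeed, the
	 * seventh trips the ceiling and is rejected.
	 */
	for (attempt = 1; attempt <= 7; attempt++) {
		if (demo_recursive_lock(&ctx)) {
			printf("attempt %d: rejected\n", attempt);
		} else {
			printf("attempt %d: acquired\n", attempt);
			acquired++;
		}
	}

	/*
	 * Unwind the acquired levels, as the real ring buffer does
	 * when each nested event commits.
	 */
	while (acquired--)
		demo_recursive_unlock(&ctx);

	return 0;
}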