From: Thomas Gleixner
To: LKML
Cc: Josh Poimboeuf, x86@kernel.org, Andy Lutomirski, Steven Rostedt,
    Alexander Potapenko, Alexey Dobriyan, Andrew Morton, Christoph Lameter,
    Pekka Enberg, linux-mm@kvack.org, David Rientjes, Catalin Marinas,
    Dmitry Vyukov, Andrey Ryabinin, kasan-dev@googlegroups.com,
    Mike Rapoport, Akinobu Mita, Christoph Hellwig,
    iommu@lists.linux-foundation.org, Robin Murphy, Marek Szyprowski,
    Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
    linux-btrfs@vger.kernel.org, dm-devel@redhat.com, Mike Snitzer,
    Alasdair Kergon, Daniel Vetter, intel-gfx@lists.freedesktop.org,
    Joonas Lahtinen, Maarten Lankhorst, dri-devel@lists.freedesktop.org,
    David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi, Miroslav Benes,
    linux-arch@vger.kernel.org
Subject: [patch V3 01/29] tracing: Cleanup stack trace code
Date: Thu, 25 Apr 2019 11:44:54 +0200
Message-Id: <20190425094801.230654524@linutronix.de>
References: <20190425094453.875139013@linutronix.de>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

- Remove the extra array member of stack_dump_trace[] along with the
  ARRAY_SIZE - 1 initialization for struct stack_trace :: max_entries.

  Both are historical leftovers of no value. The stack tracer never exceeds
  the array and there is no extra storage requirement either.

- Make variables which are only used in trace_stack.c static.

- Simplify the enable/disable logic (see the sketch after the diff).

- Rename stack_trace_print() as it's using the stack_trace_ namespace. Free
  the name up for stack trace related functions.

Signed-off-by: Thomas Gleixner
Reviewed-by: Steven Rostedt
---
V3: Remove the -1 init and split the variable declaration as requested by Steven.
V2: Add more cleanups and use print_max_stack() as requested by Steven.
---
 include/linux/ftrace.h     |   18 ++++--------------
 kernel/trace/trace_stack.c |   42 +++++++++++++-----------------------------
 2 files changed, 17 insertions(+), 43 deletions(-)

--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -241,21 +241,11 @@ static inline void ftrace_free_mem(struc
 
 #ifdef CONFIG_STACK_TRACER
 
-#define STACK_TRACE_ENTRIES 500
-
-struct stack_trace;
-
-extern unsigned stack_trace_index[];
-extern struct stack_trace stack_trace_max;
-extern unsigned long stack_trace_max_size;
-extern arch_spinlock_t stack_trace_max_lock;
-
 extern int stack_tracer_enabled;
-void stack_trace_print(void);
-int
-stack_trace_sysctl(struct ctl_table *table, int write,
-		   void __user *buffer, size_t *lenp,
-		   loff_t *ppos);
+
+int stack_trace_sysctl(struct ctl_table *table, int write,
+		       void __user *buffer, size_t *lenp,
+		       loff_t *ppos);
 
 /* DO NOT MODIFY THIS VARIABLE DIRECTLY! */
 DECLARE_PER_CPU(int, disable_stack_tracer);
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -18,30 +18,26 @@
 
 #include "trace.h"
 
-static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES + 1];
-unsigned stack_trace_index[STACK_TRACE_ENTRIES];
+#define STACK_TRACE_ENTRIES 500
+
+static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES];
+static unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
-/*
- * Reserve one entry for the passed in ip. This will allow
- * us to remove most or all of the stack size overhead
- * added by the stack tracer itself.
- */
 struct stack_trace stack_trace_max = {
-	.max_entries		= STACK_TRACE_ENTRIES - 1,
+	.max_entries		= STACK_TRACE_ENTRIES,
 	.entries		= &stack_dump_trace[0],
 };
 
-unsigned long stack_trace_max_size;
-arch_spinlock_t stack_trace_max_lock =
+static unsigned long stack_trace_max_size;
+static arch_spinlock_t stack_trace_max_lock =
 	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 
 DEFINE_PER_CPU(int, disable_stack_tracer);
 static DEFINE_MUTEX(stack_sysctl_mutex);
 
 int stack_tracer_enabled;
-static int last_stack_tracer_enabled;
 
-void stack_trace_print(void)
+static void print_max_stack(void)
 {
 	long i;
 	int size;
@@ -61,16 +57,7 @@ void stack_trace_print(void)
 	}
 }
 
-/*
- * When arch-specific code overrides this function, the following
- * data should be filled up, assuming stack_trace_max_lock is held to
- * prevent concurrent updates.
- *     stack_trace_index[]
- *     stack_trace_max
- *     stack_trace_max_size
- */
-void __weak
-check_stack(unsigned long ip, unsigned long *stack)
+static void check_stack(unsigned long ip, unsigned long *stack)
 {
 	unsigned long this_size, flags; unsigned long *p, *top, *start;
 	static int tracer_frame;
@@ -179,7 +166,7 @@ check_stack(unsigned long ip, unsigned l
 	stack_trace_max.nr_entries = x;
 
 	if (task_stack_end_corrupted(current)) {
-		stack_trace_print();
+		print_max_stack();
 		BUG();
 	}
 
@@ -412,23 +399,21 @@ stack_trace_sysctl(struct ctl_table *tab
 		   void __user *buffer, size_t *lenp,
 		   loff_t *ppos)
 {
+	int was_enabled;
 	int ret;
 
 	mutex_lock(&stack_sysctl_mutex);
+	was_enabled = !!stack_tracer_enabled;
 
 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
 
-	if (ret || !write ||
-	    (last_stack_tracer_enabled == !!stack_tracer_enabled))
+	if (ret || !write || (was_enabled == !!stack_tracer_enabled))
 		goto out;
 
-	last_stack_tracer_enabled = !!stack_tracer_enabled;
-
 	if (stack_tracer_enabled)
 		register_ftrace_function(&trace_ops);
 	else
 		unregister_ftrace_function(&trace_ops);
-
  out:
 	mutex_unlock(&stack_sysctl_mutex);
 	return ret;
@@ -444,7 +429,6 @@ static __init int enable_stacktrace(char
 		strncpy(stack_trace_filter_buf, str + len, COMMAND_LINE_SIZE);
 
 	stack_tracer_enabled = 1;
-	last_stack_tracer_enabled = 1;
 	return 1;
 }
 __setup("stacktrace", enable_stacktrace);
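
The enable/disable simplification above comes down to one pattern: snapshot the
previous state into a local variable while holding the mutex and only
register/unregister on an actual transition, instead of mirroring the knob in a
file-scope last_stack_tracer_enabled that a second code path (the __setup()
handler) had to keep in sync. The following stand-alone sketch illustrates that
pattern in plain userspace C; it is not the kernel code, and pseudo_register(),
pseudo_unregister() and the pthread mutex are stand-ins chosen for illustration:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical stand-ins for register/unregister_ftrace_function(). */
static void pseudo_register(void)   { puts("tracer hook registered"); }
static void pseudo_unregister(void) { puts("tracer hook unregistered"); }

static pthread_mutex_t sysctl_mutex = PTHREAD_MUTEX_INITIALIZER;
static int tracer_enabled;

/*
 * Same shape as the patched stack_trace_sysctl(): capture the previous
 * state in a local variable under the lock, update the knob, and act
 * only on a real 0 <-> 1 transition.
 */
static void set_tracer_enabled(int val)
{
	int was_enabled;

	pthread_mutex_lock(&sysctl_mutex);
	was_enabled = !!tracer_enabled;

	tracer_enabled = !!val;		/* stands in for proc_dointvec() */

	if (was_enabled != tracer_enabled) {
		if (tracer_enabled)
			pseudo_register();
		else
			pseudo_unregister();
	}
	pthread_mutex_unlock(&sysctl_mutex);
}

int main(void)
{
	set_tracer_enabled(1);	/* transition: registers   */
	set_tracer_enabled(1);	/* no transition: no output */
	set_tracer_enabled(0);	/* transition: unregisters */
	return 0;
}

Compared with the removed last_stack_tracer_enabled, the local snapshot cannot
go stale and needs no extra initialization in enable_stacktrace().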