From mboxrd@z Thu Jan 1 00:00:00 1970
Message-Id: <20190425094803.713568606@linutronix.de>
User-Agent: quilt/0.65
Date: Thu, 25 Apr 2019 11:45:21 +0200
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML
Cc: Josh Poimboeuf, x86@kernel.org, Andy Lutomirski, linux-arch@vger.kernel.org,
 Steven Rostedt, Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
 Christoph Lameter, Pekka Enberg, linux-mm@kvack.org, David Rientjes,
 Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev@googlegroups.com,
 Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu@lists.linux-foundation.org,
 Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
 Chris Mason, Josef Bacik, linux-btrfs@vger.kernel.org, dm-devel@redhat.com,
 Mike Snitzer, Alasdair Kergon, Daniel Vetter, intel-gfx@lists.freedesktop.org,
 Joonas Lahtinen, Maarten Lankhorst, dri-devel@lists.freedesktop.org,
 David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi, Miroslav Benes
Subject: [patch V3 28/29] stacktrace: Provide common infrastructure
References: <20190425094453.875139013@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

All architectures which support stacktrace carry duplicated code and
do the stack storage and filtering on the architecture side.

Provide a consolidated interface with a callback function for consuming
the stack entries provided by the architecture specific stack walker.
This removes lots of duplicated code and allows implementing better
filtering than 'skip number of entries' in the future without touching
any architecture specific code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
---
V3: Fix kernel doc
---
 include/linux/stacktrace.h |   39 ++++++++++
 kernel/stacktrace.c        |  173 +++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig                |    4 +
 3 files changed, 216 insertions(+)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -23,6 +23,44 @@ unsigned int stack_trace_save_regs(struc
 unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
 
 /* Internal interfaces. Do not use in generic code */
+#ifdef CONFIG_ARCH_STACKWALK
+
+/**
+ * stack_trace_consume_fn - Callback for arch_stack_walk()
+ * @cookie:	Caller supplied pointer handed back by arch_stack_walk()
+ * @addr:	The stack entry address to consume
+ * @reliable:	True when the stack entry is reliable. Required by
+ *		some printk based consumers.
+ *
+ * Return:	True, if the entry was consumed or skipped
+ *		False, if there is no space left to store
+ */
+typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
+				       bool reliable);
+/**
+ * arch_stack_walk - Architecture specific function to walk the stack
+ * @consume_entry:	Callback which is invoked by the architecture code for
+ *			each entry.
+ * @cookie:		Caller supplied pointer which is handed back to
+ *			@consume_entry
+ * @task:		Pointer to a task struct, can be NULL
+ * @regs:		Pointer to registers, can be NULL
+ *
+ * ============ ======= ============================================
+ * task		regs
+ * ============ ======= ============================================
+ * task		NULL	Stack trace from task (can be current)
+ * current	regs	Stack trace starting on regs->stackpointer
+ * ============ ======= ============================================
+ */
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs);
+int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
+			     struct task_struct *task);
+void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+			  const struct pt_regs *regs);
+
+#else /* CONFIG_ARCH_STACKWALK */
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
@@ -37,6 +75,7 @@ extern void save_stack_trace_tsk(struct
 extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
 					 struct stack_trace *trace);
 extern void save_stack_trace_user(struct stack_trace *trace);
+#endif /* !CONFIG_ARCH_STACKWALK */
 #endif /* CONFIG_STACKTRACE */
 
 #if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -5,6 +5,8 @@
  *
  *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
  */
+#include <linux/sched/task_stack.h>
+#include <linux/sched/debug.h>
 #include <linux/sched.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
@@ -66,6 +68,175 @@ int stack_trace_snprint(char *buf, size_
 }
 EXPORT_SYMBOL_GPL(stack_trace_snprint);
 
+#ifdef CONFIG_ARCH_STACKWALK
+
+struct stacktrace_cookie {
+	unsigned long	*store;
+	unsigned int	size;
+	unsigned int	skip;
+	unsigned int	len;
+};
+
+static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
+				      bool reliable)
+{
+	struct stacktrace_cookie *c = cookie;
+
+	if (c->len >= c->size)
+		return false;
+
+	if (c->skip > 0) {
+		c->skip--;
+		return true;
+	}
+	c->store[c->len++] = addr;
+	return c->len < c->size;
+}
+
+static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr,
+					      bool reliable)
+{
+	if (in_sched_functions(addr))
+		return true;
+	return stack_trace_consume_entry(cookie, addr, reliable);
+}
+
+/**
+ * stack_trace_save - Save a stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr + 1,
+	};
+
+	arch_stack_walk(consume_entry, &c, current, NULL);
+	return c.len;
+}
+EXPORT_SYMBOL_GPL(stack_trace_save);
+
+/**
+ * stack_trace_save_tsk - Save a task stack trace into a storage array
+ * @tsk:	The task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size, unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry_nosched;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr + 1,
+	};
+
+	if (!try_get_task_stack(tsk))
+		return 0;
+
+	arch_stack_walk(consume_entry, &c, tsk, NULL);
+	put_task_stack(tsk);
+	return c.len;
+}
+
+/**
+ * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * @regs:	Pointer to pt_regs to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr,
+	};
+
+	arch_stack_walk(consume_entry, &c, current, regs);
+	return c.len;
+}
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+/**
+ * stack_trace_save_tsk_reliable - Save task stack with verification
+ * @tsk:	Pointer to the task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return:	An error if it detects any unreliable features of the
+ *		stack. Otherwise it guarantees that the stack trace is
+ *		reliable and returns the number of entries stored.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
+ */
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+	};
+	int ret;
+
+	/*
+	 * If the task doesn't have a stack (e.g., a zombie), the stack is
+	 * "reliably" empty.
+	 */
+	if (!try_get_task_stack(tsk))
+		return 0;
+
+	ret = arch_stack_walk_reliable(consume_entry, &c, tsk);
+	put_task_stack(tsk);
+	return ret;
+}
+#endif
+
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+/**
+ * stack_trace_save_user - Save a user space stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+	};
+
+	/* Trace user stack if not a kernel thread */
+	if (!current->mm)
+		return 0;
+
+	arch_stack_walk_user(consume_entry, &c, task_pt_regs(current));
+	return c.len;
+}
+#endif
+
+#else /* CONFIG_ARCH_STACKWALK */
+
 /*
  * Architectures that do not implement save_stack_trace_*()
  * get these weak aliases and once-per-bootup warnings
@@ -203,3 +374,5 @@ unsigned int stack_trace_save_user(unsig
 	return trace.nr_entries;
 }
 #endif /* CONFIG_USER_STACKTRACE_SUPPORT */
+
+#endif /* !CONFIG_ARCH_STACKWALK */
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -597,6 +597,10 @@ config ARCH_HAS_UACCESS_FLUSHCACHE
 config ARCH_HAS_UACCESS_MCSAFE
 	bool
 
+# Temporary. Goes away when all archs are cleaned up
+config ARCH_STACKWALK
+	bool
+
 config STACKDEPOT
 	bool
 	select STACKTRACE
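
For context, an architecture opts into the new model by selecting
ARCH_STACKWALK and supplying arch_stack_walk(). The following is a
minimal sketch, not part of this patch, assuming an x86-style unwinder
(unwind_start(), unwind_done(), unwind_next_frame(),
unwind_get_return_address()); substitute whatever frame walker the
architecture actually provides:

/* Sketch only: models the expected control flow, not patch code */
#include <linux/stacktrace.h>
#include <asm/unwind.h>

void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
		     struct task_struct *task, struct pt_regs *regs)
{
	struct unwind_state state;
	unsigned long addr;

	/* When regs are handed in, report the interrupted PC first */
	if (regs && !consume_entry(cookie, regs->ip, false))
		return;

	for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
	     unwind_next_frame(&state)) {
		addr = unwind_get_return_address(&state);
		/* A zero address or a 'false' return ends the walk */
		if (!addr || !consume_entry(cookie, addr, false))
			break;
	}
}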
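
The generic entry points keep their existing signatures, so existing
callers need no change. A hypothetical consumer looks the same as
before; stack_trace_print() is the helper introduced earlier in this
series:

/* Hypothetical caller of the unchanged generic API */
#include <linux/kernel.h>
#include <linux/stacktrace.h>

static void report_current_stack(void)
{
	unsigned long entries[16];
	unsigned int nr;

	/*
	 * skipnr == 0: stack_trace_save() adds 1 internally so that
	 * its own frame does not show up in the result.
	 */
	nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	stack_trace_print(entries, nr, 0);
}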
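
The 'better filtering' point is that a callback plus cookie can express
predicates and early termination that a plain 'skip N entries' counter
cannot. A hypothetical filter, placed next to the core consumers above
since arch_stack_walk() is an internal interface per the header comment:

/* Hypothetical filter built on the callback model; illustration only */
struct match_cookie {
	unsigned long	lookfor;
	bool		found;
};

static bool match_entry(void *cookie, unsigned long addr, bool reliable)
{
	struct match_cookie *c = cookie;

	if (addr != c->lookfor)
		return true;		/* keep walking */

	c->found = true;
	return false;			/* terminate the walk early */
}

/* Returns true when @addr is on current's call chain */
static bool stack_contains(unsigned long addr)
{
	struct match_cookie c = { .lookfor = addr };

	arch_stack_walk(match_entry, &c, current, NULL);
	return c.found;
}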