From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-4.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS,
	MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_PASS autolearn=unavailable
	autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id C1E3CC282E1
	for ; Thu, 25 Apr 2019 10:02:20 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by mail.kernel.org (Postfix) with ESMTP id 901C8206BA
	for ; Thu, 25 Apr 2019 10:02:20 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1729270AbfDYKCI (ORCPT );
	Thu, 25 Apr 2019 06:02:08 -0400
Received: from Galois.linutronix.de ([146.0.238.70]:57987 "EHLO
	Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1729430AbfDYKA0 (ORCPT );
	Thu, 25 Apr 2019 06:00:26 -0400
Received: from localhost ([127.0.0.1] helo=nanos.tec.linutronix.de)
	by Galois.linutronix.de with esmtp (Exim 4.80)
	(envelope-from ) id 1hJb9l-0001qa-F3;
	Thu, 25 Apr 2019 11:59:05 +0200
Message-Id: <20190425094801.324810708@linutronix.de>
User-Agent: quilt/0.65
Date: Thu, 25 Apr 2019 11:44:55 +0200
From: Thomas Gleixner
To: LKML
Cc: Josh Poimboeuf , x86@kernel.org, Andy Lutomirski ,
	Steven Rostedt , Alexander Potapenko , Alexey Dobriyan ,
	Andrew Morton , Christoph Lameter , Pekka Enberg ,
	linux-mm@kvack.org, David Rientjes , Catalin Marinas ,
	Dmitry Vyukov , Andrey Ryabinin , kasan-dev@googlegroups.com,
	Mike Rapoport , Akinobu Mita , Christoph Hellwig ,
	iommu@lists.linux-foundation.org, Robin Murphy , Marek Szyprowski ,
	Johannes Thumshirn , David Sterba , Chris Mason , Josef Bacik ,
	linux-btrfs@vger.kernel.org, dm-devel@redhat.com, Mike Snitzer ,
	Alasdair Kergon , Daniel Vetter , intel-gfx@lists.freedesktop.org,
	Joonas Lahtinen , Maarten Lankhorst ,
	dri-devel@lists.freedesktop.org, David Airlie , Jani Nikula ,
	Rodrigo Vivi , Tom Zanussi , Miroslav Benes ,
	linux-arch@vger.kernel.org
Subject: [patch V3 02/29] stacktrace: Provide helpers for common stack trace operations
References: <20190425094453.875139013@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-btrfs-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-btrfs@vger.kernel.org

All operations with stack traces are based on struct stack_trace. That's a
horrible construct as the struct is a kitchen sink for input and
output. Quite some usage sites embed it into their own data structures
which creates weird indirections.

There is absolutely no point in doing so. For all use cases a storage array
and the number of valid stack trace entries in the array is sufficient.

Provide helper functions which avoid the struct stack_trace indirection so
the usage sites can be cleaned up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V3: Fix kernel doc.
---
 include/linux/stacktrace.h |   27 +++++++
 kernel/stacktrace.c        |  170 +++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 182 insertions(+), 15 deletions(-)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -3,11 +3,26 @@
 #define __LINUX_STACKTRACE_H
 
 #include <linux/types.h>
+#include <asm/errno.h>
 
 struct task_struct;
 struct pt_regs;
 
 #ifdef CONFIG_STACKTRACE
+void stack_trace_print(unsigned long *trace, unsigned int nr_entries,
+		       int spaces);
+int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			unsigned int nr_entries, int spaces);
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr);
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr);
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr);
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+
+/* Internal interfaces. Do not use in generic code */
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
@@ -41,4 +56,16 @@ extern void save_stack_trace_user(struct
 # define save_stack_trace_tsk_reliable(tsk, trace)	({ -ENOSYS; })
 #endif /* CONFIG_STACKTRACE */
 
+#if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size);
+#else
+static inline int stack_trace_save_tsk_reliable(struct task_struct *tsk,
+						unsigned long *store,
+						unsigned int size)
+{
+	return -ENOSYS;
+}
+#endif
+
 #endif /* __LINUX_STACKTRACE_H */
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -11,35 +11,54 @@
 #include <linux/kallsyms.h>
 #include <linux/stacktrace.h>
 
-void print_stack_trace(struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_print - Print the entries in the stack trace
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ */
+void stack_trace_print(unsigned long *entries, unsigned int nr_entries,
+		       int spaces)
 {
-	int i;
+	unsigned int i;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return;
 
-	for (i = 0; i < trace->nr_entries; i++)
-		printk("%*c%pS\n", 1 + spaces, ' ', (void *)trace->entries[i]);
+	for (i = 0; i < nr_entries; i++)
+		printk("%*c%pS\n", 1 + spaces, ' ', (void *)entries[i]);
+}
+EXPORT_SYMBOL_GPL(stack_trace_print);
+
+void print_stack_trace(struct stack_trace *trace, int spaces)
+{
+	stack_trace_print(trace->entries, trace->nr_entries, spaces);
 }
 EXPORT_SYMBOL_GPL(print_stack_trace);
 
-int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_snprint - Print the entries in the stack trace into a buffer
+ * @buf:	Pointer to the print buffer
+ * @size:	Size of the print buffer
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ *
+ * Return: Number of bytes printed.
+ */
+int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			unsigned int nr_entries, int spaces)
 {
-	int i;
-	int generated;
-	int total = 0;
+	unsigned int generated, i, total = 0;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return 0;
 
-	for (i = 0; i < trace->nr_entries; i++) {
+	for (i = 0; i < nr_entries && size; i++) {
 		generated = snprintf(buf, size, "%*c%pS\n", 1 + spaces, ' ',
-				     (void *)trace->entries[i]);
+				     (void *)entries[i]);
 
 		total += generated;
-
-		/* Assume that generated isn't a negative number */
 		if (generated >= size) {
 			buf += size;
 			size = 0;
@@ -51,6 +70,14 @@ int snprint_stack_trace(char *buf, size_
 
 	return total;
 }
+EXPORT_SYMBOL_GPL(stack_trace_snprint);
+
+int snprint_stack_trace(char *buf, size_t size,
+			struct stack_trace *trace, int spaces)
+{
+	return stack_trace_snprint(buf, size, trace->entries,
+				   trace->nr_entries, spaces);
+}
 EXPORT_SYMBOL_GPL(snprint_stack_trace);
 
 /*
@@ -77,3 +104,116 @@ save_stack_trace_tsk_reliable(struct tas
 	WARN_ONCE(1, KERN_INFO "save_stack_tsk_reliable() not implemented yet.\n");
 	return -ENOSYS;
 }
+
+/**
+ * stack_trace_save - Save a stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace(&trace);
+	return trace.nr_entries;
+}
+EXPORT_SYMBOL_GPL(stack_trace_save);
+
+/**
+ * stack_trace_save_tsk - Save a task stack trace into a storage array
+ * @task:	The task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace_tsk(task, &trace);
+	return trace.nr_entries;
+}
+
+/**
+ * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * @regs:	Pointer to pt_regs to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr,
+	};
+
+	save_stack_trace_regs(regs, &trace);
+	return trace.nr_entries;
+}
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+/**
+ * stack_trace_save_tsk_reliable - Save task stack with verification
+ * @tsk:	Pointer to the task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return:	An error if it detects any unreliable features of the
+ *		stack. Otherwise it guarantees that the stack trace is
+ *		reliable and returns the number of entries stored.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
+ */
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+	};
+	int ret = save_stack_trace_tsk_reliable(tsk, &trace);
+
+	return ret ? ret : trace.nr_entries;
+}
+#endif
+
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+/**
+ * stack_trace_save_user - Save a user space stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+	};
+
+	save_stack_trace_user(&trace);
+	return trace.nr_entries;
+}
+#endif /* CONFIG_USER_STACKTRACE_SUPPORT */