From mboxrd@z Thu Jan 1 00:00:00 1970
Message-Id: <20190425094801.414574828@linutronix.de>
User-Agent: quilt/0.65
Date: Thu, 25 Apr 2019 11:44:56 +0200
From: Thomas Gleixner
To: LKML
Cc: Josh Poimboeuf, x86@kernel.org, Andy Lutomirski, Alexander Potapenko,
 Steven Rostedt, Alexey Dobriyan, Andrew Morton, Christoph Lameter,
 Pekka Enberg, linux-mm@kvack.org, David Rientjes, Catalin Marinas,
 Dmitry Vyukov, Andrey Ryabinin, kasan-dev@googlegroups.com, Mike Rapoport,
 Akinobu Mita, Christoph Hellwig, iommu@lists.linux-foundation.org,
 Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
 Chris Mason, Josef Bacik, linux-btrfs@vger.kernel.org, dm-devel@redhat.com,
 Mike Snitzer, Alasdair Kergon, Daniel Vetter, intel-gfx@lists.freedesktop.org,
 Joonas Lahtinen, Maarten Lankhorst, dri-devel@lists.freedesktop.org,
 David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi, Miroslav Benes,
 linux-arch@vger.kernel.org
Subject: [patch V3 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays
References: <20190425094453.875139013@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The struct stack_trace indirection in the stack depot functions is a truly
pointless exercise which requires horrible code at the callsites.

Provide interfaces based on plain storage arrays.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Alexander Potapenko <glider@google.com>
---
V3: Fix kernel-doc
---
 include/linux/stackdepot.h |    4 ++
 lib/stackdepot.c           |   70 ++++++++++++++++++++++++++++++++-------------
 2 files changed, 55 insertions(+), 19 deletions(-)

--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -26,7 +26,11 @@ typedef u32 depot_stack_handle_t;
 struct stack_trace;
 
 depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries, gfp_t gfp_flags);
 
 void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries);
 
 #endif
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -194,40 +194,60 @@ static inline struct stack_record *find_
 	return NULL;
 }
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+/**
+ * stack_depot_fetch - Fetch stack entries from a depot
+ *
+ * @handle:		Stack depot handle which was returned from
+ *			stack_depot_save().
+ * @entries:		Pointer to store the entries address
+ *
+ * Return: The number of trace entries for this depot.
+ */
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries)
 {
 	union handle_parts parts = { .handle = handle };
 	void *slab = stack_slabs[parts.slabindex];
 	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
 	struct stack_record *stack = slab + offset;
 
-	trace->nr_entries = trace->max_entries = stack->size;
-	trace->entries = stack->entries;
-	trace->skip = 0;
+	*entries = stack->entries;
+	return stack->size;
+}
+EXPORT_SYMBOL_GPL(stack_depot_fetch);
+
+void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+{
+	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
+
+	trace->max_entries = trace->nr_entries = nent;
 }
 EXPORT_SYMBOL_GPL(depot_fetch_stack);
 
 /**
- * depot_save_stack - save stack in a stack depot.
- * @trace - the stacktrace to save.
- * @alloc_flags - flags for allocating additional memory if required.
+ * stack_depot_save - Save a stack trace from an array
+ *
+ * @entries:		Pointer to storage array
+ * @nr_entries:		Size of the storage array
+ * @alloc_flags:	Allocation gfp flags
  *
- * Returns the handle of the stack struct stored in depot.
+ * Return: The handle of the stack struct stored in depot
  */
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
-				    gfp_t alloc_flags)
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries,
+				      gfp_t alloc_flags)
 {
-	u32 hash;
-	depot_stack_handle_t retval = 0;
 	struct stack_record *found = NULL, **bucket;
-	unsigned long flags;
+	depot_stack_handle_t retval = 0;
 	struct page *page = NULL;
 	void *prealloc = NULL;
+	unsigned long flags;
+	u32 hash;
 
-	if (unlikely(trace->nr_entries == 0))
+	if (unlikely(nr_entries == 0))
 		goto fast_exit;
 
-	hash = hash_stack(trace->entries, trace->nr_entries);
+	hash = hash_stack(entries, nr_entries);
 	bucket = &stack_table[hash & STACK_HASH_MASK];
 
 	/*
@@ -235,8 +255,8 @@ depot_stack_handle_t depot_save_stack(st
 	 * The smp_load_acquire() here pairs with smp_store_release() to
 	 * |bucket| below.
	 */
-	found = find_stack(smp_load_acquire(bucket), trace->entries,
-			   trace->nr_entries, hash);
+	found = find_stack(smp_load_acquire(bucket), entries,
+			   nr_entries, hash);
 	if (found)
 		goto exit;
 
@@ -264,10 +284,10 @@ depot_stack_handle_t depot_save_stack(st
 
 	spin_lock_irqsave(&depot_lock, flags);
 
-	found = find_stack(*bucket, trace->entries, trace->nr_entries, hash);
+	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
 		struct stack_record *new =
-			depot_alloc_stack(trace->entries, trace->nr_entries,
+			depot_alloc_stack(entries, nr_entries,
 					  hash, &prealloc, alloc_flags);
 		if (new) {
 			new->next = *bucket;
@@ -297,4 +317,16 @@ depot_stack_handle_t depot_save_stack(st
 fast_exit:
 	return retval;
 }
+EXPORT_SYMBOL_GPL(stack_depot_save);
+
+/**
+ * depot_save_stack - save stack in a stack depot.
+ * @trace - the stacktrace to save.
+ * @alloc_flags - flags for allocating additional memory if required.
+ */
+depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
+				      gfp_t alloc_flags)
+{
+	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
+}
 EXPORT_SYMBOL_GPL(depot_save_stack);
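
For context, a minimal caller-side sketch of the new array-based API follows. This is
illustration only, not part of the patch: the wrapper names and the way the entries[]
array gets filled are hypothetical, while stack_depot_save(), stack_depot_fetch() and
depot_stack_handle_t are the interfaces introduced above.

	#include <linux/gfp.h>
	#include <linux/printk.h>
	#include <linux/stackdepot.h>

	/* Hypothetical helper: store an already-collected stack trace. */
	static depot_stack_handle_t example_store(unsigned long *entries,
						  unsigned int nr_entries)
	{
		/*
		 * Hashes and deduplicates the plain array; returns a handle,
		 * or 0 if nothing could be stored.
		 */
		return stack_depot_save(entries, nr_entries, GFP_NOWAIT);
	}

	/* Hypothetical helper: print a previously stored trace. */
	static void example_print(depot_stack_handle_t handle)
	{
		unsigned long *entries;
		unsigned int nr, i;

		/* Hands back a pointer into the depot plus the entry count. */
		nr = stack_depot_fetch(handle, &entries);
		for (i = 0; i < nr; i++)
			pr_info("%pS\n", (void *)entries[i]);
	}

Compared to depot_save_stack()/depot_fetch_stack(), the caller no longer has to build a
struct stack_trace just to carry an array pointer and a count across the interface.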