From mboxrd@z Thu Jan  1 00:00:00 1970
Message-Id: <20190425094802.067210525@linutronix.de>
User-Agent: quilt/0.65
Date: Thu, 25 Apr 2019 11:45:03 +0200
From: Thomas Gleixner
To: LKML
Cc: Josh Poimboeuf, x86@kernel.org, Andy Lutomirski, linux-mm@kvack.org,
    Mike Rapoport, David Rientjes, Andrew Morton, Steven Rostedt,
    Alexander Potapenko, Alexey Dobriyan, Christoph Lameter, Pekka Enberg,
    Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin,
    kasan-dev@googlegroups.com, Akinobu Mita, Christoph Hellwig,
    iommu@lists.linux-foundation.org, Robin Murphy, Marek Szyprowski,
    Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
    linux-btrfs@vger.kernel.org, dm-devel@redhat.com, Mike Snitzer,
    Alasdair Kergon, Daniel Vetter, intel-gfx@lists.freedesktop.org,
    Joonas Lahtinen, Maarten Lankhorst, dri-devel@lists.freedesktop.org,
    David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi, Miroslav Benes,
    linux-arch@vger.kernel.org
Subject: [patch V3 10/29] mm/page_owner: Simplify stack trace handling
References: <20190425094453.875139013@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

The original code in all printing functions is really wrong: it allocates
a storage array on the stack, which goes unused because depot_fetch_stack()
does not store anything in it. Instead, depot_fetch_stack() overwrites the
entries pointer in the stack_trace struct so that it points to the depot
storage.
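For illustration, the broken pattern described above looked roughly like
this (a sketch, not the verbatim removed code; see the diff below):

	unsigned long entries[PAGE_OWNER_STACK_DEPTH];	/* dead weight, never written */
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= PAGE_OWNER_STACK_DEPTH,
	};

	depot_fetch_stack(handle, &trace);	/* redirects trace.entries into the depot */
	print_stack_trace(&trace, 0);		/* prints the depot storage, not entries[] */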
Signed-off-by: Thomas Gleixner
Cc: linux-mm@kvack.org
Cc: Mike Rapoport
Cc: David Rientjes
Cc: Andrew Morton
---
 mm/page_owner.c |   79 +++++++++++++++++++-------------------------------------
 1 file changed, 28 insertions(+), 51 deletions(-)

--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -58,15 +58,10 @@ static bool need_page_owner(void)
 static __always_inline depot_stack_handle_t create_dummy_stack(void)
 {
 	unsigned long entries[4];
-	struct stack_trace dummy;
+	unsigned int nr_entries;
 
-	dummy.nr_entries = 0;
-	dummy.max_entries = ARRAY_SIZE(entries);
-	dummy.entries = &entries[0];
-	dummy.skip = 0;
-
-	save_stack_trace(&dummy);
-	return depot_save_stack(&dummy, GFP_KERNEL);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	return stack_depot_save(entries, nr_entries, GFP_KERNEL);
 }
 
 static noinline void register_dummy_stack(void)
@@ -120,46 +115,39 @@ void __reset_page_owner(struct page *pag
 	}
 }
 
-static inline bool check_recursive_alloc(struct stack_trace *trace,
-					unsigned long ip)
+static inline bool check_recursive_alloc(unsigned long *entries,
+					 unsigned int nr_entries,
+					 unsigned long ip)
 {
-	int i;
+	unsigned int i;
 
-	if (!trace->nr_entries)
-		return false;
-
-	for (i = 0; i < trace->nr_entries; i++) {
-		if (trace->entries[i] == ip)
+	for (i = 0; i < nr_entries; i++) {
+		if (entries[i] == ip)
 			return true;
 	}
-
 	return false;
 }
 
 static noinline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 2
-	};
 	depot_stack_handle_t handle;
+	unsigned int nr_entries;
 
-	save_stack_trace(&trace);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
 
 	/*
-	 * We need to check recursion here because our request to stackdepot
-	 * could trigger memory allocation to save new entry. New memory
-	 * allocation would reach here and call depot_save_stack() again
-	 * if we don't catch it. There is still not enough memory in stackdepot
-	 * so it would try to allocate memory again and loop forever.
+	 * We need to check recursion here because our request to
+	 * stackdepot could trigger memory allocation to save new
+	 * entry. New memory allocation would reach here and call
+	 * stack_depot_save() again if we don't catch it. There is
+	 * still not enough memory in stackdepot so it would try to
+	 * allocate memory again and loop forever.
	 */
-	if (check_recursive_alloc(&trace, _RET_IP_))
+	if (check_recursive_alloc(entries, nr_entries, _RET_IP_))
 		return dummy_handle;
 
-	handle = depot_save_stack(&trace, flags);
+	handle = stack_depot_save(entries, nr_entries, flags);
 	if (!handle)
 		handle = failure_handle;
 
@@ -337,16 +325,10 @@ print_page_owner(char __user *buf, size_
 		struct page *page, struct page_owner *page_owner,
 		depot_stack_handle_t handle)
 {
-	int ret;
-	int pageblock_mt, page_mt;
+	int ret, pageblock_mt, page_mt;
+	unsigned long *entries;
+	unsigned int nr_entries;
 	char *kbuf;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 
 	count = min_t(size_t, count, PAGE_SIZE);
 	kbuf = kmalloc(count, GFP_KERNEL);
@@ -375,8 +357,8 @@ print_page_owner(char __user *buf, size_
 	if (ret >= count)
 		goto err;
 
-	depot_fetch_stack(handle, &trace);
-	ret += snprint_stack_trace(kbuf + ret, count - ret, &trace, 0);
+	nr_entries = stack_depot_fetch(handle, &entries);
+	ret += stack_trace_snprint(kbuf + ret, count - ret, entries, nr_entries, 0);
 	if (ret >= count)
 		goto err;
 
@@ -407,14 +389,9 @@ void __dump_page_owner(struct page *page
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
 	struct page_owner *page_owner;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 	depot_stack_handle_t handle;
+	unsigned long *entries;
+	unsigned int nr_entries;
 	gfp_t gfp_mask;
 	int mt;
 
@@ -438,10 +415,10 @@ void __dump_page_owner(struct page *page
 		return;
 	}
 
-	depot_fetch_stack(handle, &trace);
+	nr_entries = stack_depot_fetch(handle, &entries);
 	pr_alert("page allocated via order %u, migratetype %s, gfp_mask %#x(%pGg)\n",
 		 page_owner->order, migratetype_names[mt], gfp_mask, &gfp_mask);
-	print_stack_trace(&trace, 0);
+	stack_trace_print(entries, nr_entries, 0);
 
 	if (page_owner->last_migrate_reason != -1)
 		pr_alert("page has been migrated, last migrate reason: %s\n",
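For reference, the conversion pattern used here and throughout the series,
as a minimal sketch of a hypothetical caller (demo_save() and demo_print()
are illustrative names, not part of the kernel):

	#include <linux/stacktrace.h>
	#include <linux/stackdepot.h>

	static depot_stack_handle_t demo_save(gfp_t gfp)
	{
		unsigned long entries[16];
		unsigned int nr_entries;

		/* Save up to ARRAY_SIZE(entries) return addresses, skipping none */
		nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
		/* Deduplicate and store the trace; returns 0 if depot allocation fails */
		return stack_depot_save(entries, nr_entries, gfp);
	}

	static void demo_print(depot_stack_handle_t handle)
	{
		unsigned long *entries;
		unsigned int nr_entries;

		/* Fetch hands back a pointer into depot storage; no on-stack copy needed */
		nr_entries = stack_depot_fetch(handle, &entries);
		stack_trace_print(entries, nr_entries, 0);
	}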