From mboxrd@z Thu Jan  1 00:00:00 1970
From: Alexander Potapenko
Date: Tue, 30 Nov 2021 13:03:37 +0100
Subject: Re: [PATCH] lib/stackdepot: always do filter_irq_stacks() in stack_depot_save()
To: Marco Elver
Cc: Andrew Morton, Andrey Ryabinin, Andrey Konovalov, Dmitry Vyukov,
    Vlastimil Babka, Vijayanand Jitta, "Gustavo A. R. Silva", Imran Khan,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org,
    Chris Wilson, Jani Nikula, Mika Kuoppala,
    dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
In-Reply-To: <20211130095727.2378739-1-elver@google.com>
References: <20211130095727.2378739-1-elver@google.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 30, 2021 at 11:14 AM Marco Elver wrote:
>
> The non-interrupt portion of interrupt stack traces before interrupt
> entry is usually arbitrary.
> Therefore, saving stack traces of interrupts
> (that include entries before interrupt entry) to stack depot leads to
> unbounded stackdepot growth.
>
> As such, use of filter_irq_stacks() is a requirement to ensure
> stackdepot can efficiently deduplicate interrupt stacks.
>
> Looking through all current users of stack_depot_save(), none (except
> KASAN) pass the stack trace through filter_irq_stacks() before passing
> it on to stack_depot_save().
>
> Rather than adding filter_irq_stacks() to all current users of
> stack_depot_save(), it became clear that stack_depot_save() should
> simply do filter_irq_stacks().
>
> Signed-off-by: Marco Elver

Reviewed-by: Alexander Potapenko

> ---
>  lib/stackdepot.c  | 13 +++++++++++++
>  mm/kasan/common.c |  1 -
>  2 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index b437ae79aca1..519c7898c7f2 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -305,6 +305,9 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
>   * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
>   * any allocations and will fail if no space is left to store the stack trace.
>   *
> + * If the stack trace in @entries is from an interrupt, only the portion up to
> + * interrupt entry is saved.
> + *
>   * Context: Any context, but setting @can_alloc to %false is required if
>   *          alloc_pages() cannot be used from the current context. Currently
>   *          this is the case from contexts where neither %GFP_ATOMIC nor
> @@ -323,6 +326,16 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>         unsigned long flags;
>         u32 hash;
>
> +       /*
> +        * If this stack trace is from an interrupt, including anything before
> +        * interrupt entry usually leads to unbounded stackdepot growth.
> +        *
> +        * Because use of filter_irq_stacks() is a requirement to ensure
> +        * stackdepot can efficiently deduplicate interrupt stacks, always
> +        * filter_irq_stacks() to simplify all callers' use of stackdepot.
> +        */
> +       nr_entries = filter_irq_stacks(entries, nr_entries);
> +
>         if (unlikely(nr_entries == 0) || stack_depot_disable)
>                 goto fast_exit;
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 8428da2aaf17..efaa836e5132 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -36,7 +36,6 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
>         unsigned int nr_entries;
>
>         nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> -       nr_entries = filter_irq_stacks(entries, nr_entries);
>         return __stack_depot_save(entries, nr_entries, flags, can_alloc);
>  }
>
> --
> 2.34.0.rc2.393.gf8c9666880-goog
>

--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
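[For reference: a minimal, illustrative sketch of the caller pattern implied by the mm/kasan/common.c hunk above. The function name save_stack_example and the STACK_DEPTH constant are placeholders and are not part of the patch; only stack_trace_save(), filter_irq_stacks() and __stack_depot_save() are the interfaces quoted in the diff. With this patch applied, the explicit filter_irq_stacks() call becomes redundant in callers because __stack_depot_save() filters internally.]

#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

#define STACK_DEPTH 64	/* placeholder depth; KASAN uses its own constant */

/* Illustrative caller, modeled on kasan_save_stack() in the hunk above. */
static depot_stack_handle_t save_stack_example(gfp_t flags, bool can_alloc)
{
	unsigned long entries[STACK_DEPTH];
	unsigned int nr_entries;

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	/*
	 * Before this patch, each caller had to truncate interrupt stacks
	 * itself:
	 *     nr_entries = filter_irq_stacks(entries, nr_entries);
	 * After it, __stack_depot_save() performs the filtering, so the
	 * call above is no longer needed here.
	 */
	return __stack_depot_save(entries, nr_entries, flags, can_alloc);
}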